Linear Collider Forum


Forum: Mokka
 Topic: New Mokka release mokka-04-02
New Mokka release mokka-04-02 [message #280] Fri, 24 June 2005 09:10
Messages: 57
Registered: February 2004
Dear Friends,

We are glad to announce a new Mokka release. The new mokka-04-02
tag is available for download from the Mokka Web page at

Gabriel Musat
(from release notes:)
-------------------------------------------------------------

What is new in this Mokka release

I. New detector model TB05
II. Gaussian smearing of the starting point and momentum of the (gaussian) gun
III. Bug fixes


I. New detector model TB05

Thanks to Fabrizio Salvatore from Royal Holloway College, University of London,
this new detector model contains, besides all sub-detectors from model TB04
(Ecal, Hcal and Tail Catcher), an implementation of the drift chambers.
As in model TB04, the configuration angle is accessed by means of the setup.
Both ASCII and LCIO output are supported.

II. Gaussian smearing of the starting point and momentum

Thanks to George Mavromanolakis from University of Cambridge, Cavendish
Laboratory, Mokka performs gaussian smearing of gun position and momentum.
This is useful for simulating the beam at the CALICE test beam more
realistically, since it did not come from a fixed point. The particleGun
commands /gun/position and /run/beamOn are replaced by /generator/gaussgun
(which gives the mean xyz starting point, the respective sigmas (in mm) and
how many particles to shoot) and /generator/gaussgun/momentum (which gives the
momentum and sigma in MeV/c):
/generator/gaussgun X sigmaX Y sigmaY Z sigmaZ Nevents
/generator/gaussgun/momentum momentum sigma
George chose to have momentum smearing instead of energy smearing because
this is the correct physical process in a beamline.
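As a concrete illustration of the two commands (the numeric values here are invented, not taken from the release), a macro fragment might look like:

```
# mean vertex at the origin with 10 mm spread in x, y and z; 100 events
/generator/gaussgun 0. 10. 0. 10. 0. 10. 100
# 3 GeV/c mean momentum with 150 MeV/c spread
/generator/gaussgun/momentum 3000. 150.
```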

III. Bug fixes

Frank Gaede from DESY checked in two fixes:

1. Files Mokka/source/Geometry/CGA/include(src)/MySQLWrapper.hh were changed
to allow Mokka to compile and link with MySQL 4.x

2. In file Mokka/source/Kernel/GNUmakefile some libraries were added for
building Mokka with granular libraries.

A bug was fixed in file Mokka/source/Geometry/Tesla/src/ in
order to avoid a crash that occurred very rarely while using geant4-07-00-ref-02 due to
an uninitialized pointer of the post step point physical volume.

 Topic: new DCH driver in Mokka/tbeam
new DCH driver in Mokka/tbeam [message #278] Thu, 09 June 2005 06:58
Messages: 15
Registered: May 2005
Location: Royal Holloway University...

I've implemented the driver to simulate the 4 drift chambers that are present in the 2005 test beam setup at Desy.
The driver will be included in the next Mokka release.


 Topic: central mysql and cvs servers will be down Monday, April 4 2005
central mysql and cvs servers will be down Monday, April 4 2005 [message #207] Fri, 01 April 2005 04:04
Messages: 57
Registered: February 2004
Dear Mokka users,

For electrical maintenance reasons, the central Mokka mysql and cvs
servers will be down on Monday, April 4, 2005 from 7 AM to 6 PM
Western European Time.

Sorry for the problems that this will cause.

Gabriel and Paulo
 Topic: Are there any sample root macro scripts that makes n-tuple with Mokka?
Are there any sample root macro scripts that makes n-tuple with Mokka? [message #204] Tue, 29 March 2005 01:44
Messages: 48
Registered: February 2004
Location: L.L.R. - Ecole polytechni...

Dear Han,

Unfortunately I don't use Root myself, so I cannot help you on this subject. But I'm posting your request on the Mokka "Linear Collider Forum" so that you can get help from other Mokka users who also use Root. I advise you to sign up to this forum, as it's a common place to discuss and get help about Mokka and other available Linear Collider software.


Hi, I'm a graduate student of Kyungpook National University in Korea.
Our lab is doing research on an EM calorimeter for the LC.

We are using Mokka for the simulation of the detector, and we set up
Mokka on our simulation node, including our own MySQL server (we dumped
the db from

To begin, I want to make n-tuple files from sample ASCII
output files using Root.
I could produce some ASCII output files by using "Mokka -o [Dir] -h
[OurHostName] -m [macrofile]"

Are there any sample root macro files for this?

Thank you in advance and have a good day.
Kookhee Han <>
Kyungpook National University
 Topic: slab shifts of the Ecal prototype changed
slab shifts of the Ecal prototype changed [message #167] Thu, 02 December 2004 07:43
Messages: 57
Registered: February 2004
Dear Friends,

The slab shifts of the central database of the Calice Ecal
prototype were changed again to allow different configuration
angles of the Ecal.

Cheers, Gabriel
 Topic: new Mokka release mokka-03-04
new Mokka release mokka-03-04 [message #162] Wed, 01 December 2004 05:45
Messages: 57
Registered: February 2004
Dear friends, a new Mokka release (mokka-03-04) is available from the Mokka Web page at ll.html

What is new in this Mokka release

I. Three small Corrections in Test Beam implementation.
II. Corrections in the Calice Ecal prototype implementation
III. A new model for demonstration purposes
IV. New Driver for TPC
V. Minor bug fixes/improvements


I. Three small Corrections in Test Beam implementation.

models in source/Geometry/tbeam

a) Correction of the position of the scintillating tiles in the
Hcal (was wrong by 8 mm), which led to an additional small
air gap
b) Oxygen was defined with z=16 (instead of 8)
c) The position of the Hcal as calculated in the catcher driver
was wrong by 0.115/2 mm.

II. Corrections in the Calice Ecal prototype implementation

The individual slabs of the Calice Ecal prototype are now allowed
to move out of the three structures (modules).

As the prototype is implemented now, the slab shifts are the same
for any configuration angle. In angular configurations only the
three modules are shifted in such a manner that the beam axis
enters every module by the center of its first slab.

III. A new model for demonstration purposes

The new mokka release features a simple calorimeter model to
be used for an introduction to Mokka within the LC-Software work-
shop at DESY, Dec. 04.

The model is called WSCal and holds as the only ingredient the
driver going along with the sensitive detector method (and the corresponding include files)

IV. New Driver for TPC

A new driver for the TPC (by Ties Behnke) has been added that improves
the geometry of the TPC as described in the TDR. In particular, the number of
layers is now 200 as opposed to 137.
Use new models D10 and D10scint (FeScintillatorHCal) to get the latest version
of the Tesla detector.

V. Minor bug fixes/improvements

a) Modified source/Kernel/GNUmakefile for sites that don't have mysql in
/usr/lib; such users need to set MYSQL_PATH.

b) mokka.steer:
added comment that PythiaFileName is not yet implemented
added /Mokka/init/lcioWriteMode WRITE_NEW/WRITE_APPEND to allow
overwriting of existing LCIO file

c) Pythia input files are now required to end in .HEPEvt or .stdhep (was HepEvt)

 Topic: central Mokka mysql and cvs servers are down
central Mokka mysql and cvs servers are down [message #161] Fri, 26 November 2004 02:32
Messages: 57
Registered: February 2004
Dear Mokka users,

For maintenance reasons, the central Mokka mysql and cvs servers
are down from Friday, November 26, 2:00 PM to Monday, November 29, 9:00 AM.

Sorry for the problems that this will cause.

Gabriel and Paulo
 Topic: Mokka branch with timing information
Mokka branch with timing information [message #109] Tue, 15 June 2004 08:37
Messages: 233
Registered: January 2004
Location: DESY, Hamburg
Dear all,

As it turned out at the LCWS in Paris, it is crucial to understand the timing structure of the events in the LC detector.
I have created a branch tag 'Mokka-03-00-dev-fg' with a Mokka version that writes timing information in the LCIO files (SimCalorimeterHit and SimTrackerHit) for most of the current sensitive detectors.
In order to produce meaningful results a current version of LCIO
is needed as well (e.g. the current HEAD or 'v01-01beta_p01' ).

Paulo and Gabriel, let me know if you want me to merge the branch with the current HEAD of the CVS in order to have the timing information with the next release of Mokka.

 Topic: new 3.0 Mokka major release
new 3.0 Mokka major release [message #76] Mon, 19 April 2004 07:46
Messages: 48
Registered: February 2004
Location: L.L.R. - Ecole polytechni...
Dear Friends,

We are glad to announce for the LCWS04 a Mokka major release. For the first time it includes code developed elsewhere and committed directly into the Mokka CVS repository in coordination with us. A big thank you to Frank Gaede (DESY), Jeremy McCormick (NICADD) and Gabriel Musat (L.L.R.) for making it a reality. For this reason it's a major release; let's call it the "LCWS04 Mokka release".

The new mokka-03-00 tag is available for download from the Mokka Web page at

Cheers, Paulo Mora de Freitas.

(from release notes:)
-------------------------------------------------------------

What is new in this Mokka release

A) Mokka Kernel improvements by Frank Gaede (DESY):

I. Steering files as alternative to command line parameters
II. Plugins for user analysis/checkplots during simulation
III. Definition of user variables in steering files
IV. Using default physics lists in geant4 installation
V. Factory for all default physics lists in geant4 installation
VI. Environment variable for MySQL installation

B) Mokka Kernel improvements (L.L.R.):

VII. New output file format for calorimeter hits
VIII. Fixed G4UnionSolid navigation

C) Detector models improvements

X. New test beam model "TB00" by Jeremy McCormick (NICADD)
XI. Changes in visualization attributes for HCal and TPC (L.L.R.)

-------------------------------------------------------------

A) Mokka Kernel improvements by Frank Gaede (DESY)

I. Steering files as alternative to command line parameters

Users can now optionally use a steering file to control the running of Mokka.
The steering file holds geant4 commands of the form
/Mokka/init/parameterName parameterValue
To get an overview of the available commands type 'ls /Mokka/init' at the Mokka command prompt.
To start Mokka with a steering file, just provide the name of the file as the only argument:

$G4WORKDIR/bin/$G4SYSTEM/Mokka mokka.steer

All command line steering options are available as geant4 commands.
Running Mokka with command line parameters works as before.
See mokka.steer for example.
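For instance, a minimal steering file could look like the fragment below (the parameter values are illustrative; each command is documented in these release notes):

```
# illustrative mokka.steer fragment
/Mokka/init/physicsListName QGSP_BERT
/Mokka/init/lcioWriteMode WRITE_NEW
/Mokka/init/userInitDouble MyCutEnergy 1000 keV
```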

II. Plugins for user analysis/checkplots during simulation

Users can define their own plugins that have action methods called at certain points during the simulation.
To define your own plugin, inherit from Plugin and define the relevant action methods.
Your plugin must have:
- a constructor of the form
MyPlugin::MyPlugin(const char* name) : Plugin( name ) { ... }
- one global instance linked to the binary; you can use the macro
in your .cc file for that.
Execution of plugins is controlled via the steering command:
/Mokka/init/registerPlugin MyPlugin
Plugins are called in the order they appear in the steering file.
See Plugin/src/ for example code. This requires JAIDA/AIDAJNI to be installed.
Modify source/Plugin/ to set your environment as needed.

Define your plugin in source/Plugin/include and source/Plugin/src and it will be
automatically linked to the Mokka binary.
If you cannot define your plugin in source/Plugin/(include,src), you have to make sure
that the object gets linked in, i.e. modify the makefile accordingly.

There is an example plugin, Checkplots, that produces AIDA histograms.
In order to use it you have to install JAIDA and AIDAJNI and set G4ANALYSIS_USE;
otherwise unset G4ANALYSIS_USE (in order not to get compile-time errors).

III. Definition of user variables in steering files

Users can define their own named variables of type string, double [with unit] and int.
Use the commands:
/Mokka/init/userInitString MyPluginFilename MyPlugin.aida
/Mokka/init/userInitDouble MyCutEnergy 1000 keV
/Mokka/init/userInitDouble MyPi 3.14159
/Mokka/init/userInitInt NBins 100
to define the variables in the steering file.

Variables are accessed via the class 'UserInit', e.g.
#include "UserInit.hh"
std::string fileName = UserInit::getInstance()->getString("MyPluginFilename") ;
double myCutEnergy = UserInit::getInstance()->getDouble("MyCutEnergy") ;
int nBins = UserInit::getInstance()->getInt("NBins") ;
anywhere in the code.

IV. Using default physics lists in geant4 installation

Makefiles have been modified to use the default physics lists provided with
the standard Geant4 installation -> this requires geant4.6.0 or higher.

V. Factory for all default physics lists in geant4 installation

Users can activate any of the default physics lists in geant4 installation
via a steering file command:
/Mokka/init/physicsListName QGSP_BERT
The default is 'PhysicsList', which has been used in Mokka so far.

INSTALLATION NOTE: all the available Geant4 physics list libraries have
to be built before compiling this Mokka release. To do that, you have to change
directory to


and run the gmake command.

VI. Environment variable for MySQL installation

Users can set the environment variable MYSQL_PATH in case their mysql installation
is not under /usr/lib. No modification of Makefiles necessary.

B) Mokka Kernel improvements by L.L.R.:

VII. New output file format for calorimeter hits (ASCII and LCIO)

Following the LCIO conventions, the hits output format for
calorimeters changed. This new format is driven by a new generic calorimeter
hit class, the CalHit. The old CellHit is kept for backward compatibility.
New developments should adopt CalHit, and old geometry drivers should
migrate to CalHit to be compliant with the new hit output file format for
calorimeters.
This new format keeps just one entry per touched calorimeter cell. The partial
contributions by PID and, within each PID, by PDG are attached to the cell entry
in the file as described in the LCIO documentation.

For ASCII files, the new output format for calorimeters becomes, for each line:


P = detector piece number:
1 = Ecal end cap -Z
2 = Ecal barrel
3 = Ecal end cap +Z
4 = Hcal end cap -Z
5 = Hcal barrel
6 = Hcal end cap +Z
S = stave number (1-8 for barrel, 1-4 for end caps)
M = module number in stave (1-5 for barrel, 1 for end caps)
About the end caps: each end cap is composed of 4 staves, each
stave has 1 module.
I,J = the cell coordinates in the cells matrix ( I, J >= 0)
K = Sensitive (Si or scintillator) layer number (K >= 1)
Be careful: I,J,K is just the index inside the module. To address
absolutely the cell in the detector you have to specify all the
(P,S,M,I,J,K) values.
X,Y,Z = the cell center in world coordinates
E = the total energy deposited in the cell by the PID particle and
its secondaries.
CellId = encoded cellId (for CGA)
CGAFLAG = internal CGA flag (for CGA)
nPID = number of primary particle partial contributions following
this line

Immediately following each cell summary line, nPID detail lines are written with
the PID partial contributions, in the format:


PID = primary particle id in the Pythia file.
EPID = partial energy contribution from this PID to this cell
nPDG = number of PDG partial contributions following
this line.

and nPDG lines in the format:


PDG = particle type (electron, positron, etc)
EPDG = partial PDG energy contribution from the above PID.
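The cell summary records described above could be read with a small sketch like the following. Note that the exact column layout was lost from this archive copy, so the field order used here (P S M I J K X Y Z E CellId CGAFLAG nPID) is an assumption based purely on the order in which the fields are documented:

```python
# Hypothetical reader for a cell summary line of the new ASCII format.
# Field order is assumed, not confirmed by the original release notes.

def parse_cell_summary(line):
    f = line.split()
    keys_int = ["P", "S", "M", "I", "J", "K"]
    rec = {k: int(v) for k, v in zip(keys_int, f[:6])}       # piece/stave/module/cell indices
    rec["X"], rec["Y"], rec["Z"], rec["E"] = (float(v) for v in f[6:10])  # position and energy
    rec["CellId"] = int(f[10])    # encoded cellId (for CGA)
    rec["CGAFLAG"] = int(f[11])   # internal CGA flag
    rec["nPID"] = int(f[12])      # number of PID detail lines that follow
    return rec

hit = parse_cell_summary("2 3 1 10 12 5 1.5 -2.0 30.0 0.25 4242 0 1")
```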

VIII. Fixed G4UnionSolid navigation

Thanks to a G4Navigator bug fix available since the Geant4 6.1 release,
the workaround that intercepted the G4UnionSolid failures and reset
the G4Navigator is no longer needed.

C) Detector models improvements by Jeremy McCormick (NICADD)

X. A new test beam model, the "TB" one, including a Hcal prototype
with GEM. A complete description can be found at

XI. Changes in visualization attributes for HCal and TPC

To speed up visualization when running with Tesla models,
the visualization attributes for the Hcal and TPC envelopes are
now set to "Daughters Invisible".
Forum: Tracking & Vertexing
 Topic: wrong track direction...? PLEASE IGNORE!
wrong track direction...? PLEASE IGNORE! [message #2350] Fri, 16 January 2015 00:22
Messages: 22
Registered: November 2012
No Message Body

[Updated on: Fri, 16 January 2015 00:44]

 Topic: Problem with PerEventIPFitterProcessor
Problem with PerEventIPFitterProcessor [message #2170] Thu, 24 March 2011 05:02
Hajrah Tabassam
Messages: 6
Registered: March 2011

Hi Experts

There is a problem which I am getting while I am using Marlin. I am trying to run the LCFIVertex package on my samples but getting
this error:

[ VERBOSE "MyPerEventIPFitterProcessor"] A runtime error has occured : lcio::ReadOnlyException: LCCollectionVec::addElement
[ VERBOSE "MyPerEventIPFitterProcessor"] the program will have to be terminated - sorry.
I tried to use LCFIVertex package without this processor and it is working fine.

Is there any suggestion on how to solve this problem? And, secondly, what is the difference between the default IPVertex and the one reconstructed by PerEventIPFitterProcessor?

 Topic: building LCFIVertex with AIDAJNI
building LCFIVertex with AIDAJNI [message #2078] Wed, 29 September 2010 14:15
Messages: 9
Registered: November 2009

I want to build the LCFIVertex package with AIDAJNI. I have ilcsoft version 01-08-01, so I am trying to install AIDAJNI 3.2.3 there using ilcsoft-install. It says that the build is successful:

Total time: 7 seconds
sh: gmake: command not found
sh: gmake: command not found
Traceback (most recent call last):
File "/var/autofs/nfs/rawcmos11/voutsi/v01-08-01/ilcsoft-install", line 72, in ?
File "/var/autofs/nfs/rawcmos11/voutsi/v01-08-01/ilcsoft/", line 423, in makeinstall
File "/var/autofs/nfs/rawcmos11/voutsi/v01-08-01/ilcsoft/", line 758, in install
File "/var/autofs/nfs/rawcmos11/voutsi/v01-08-01/ilcsoft/", line 71, in compile
os.system( 'tar -xzf %s-%s-'+self.os_ver.type+'-g++.tar.gz' % (self.alias, self.version) )
TypeError: not all arguments converted during string formatting
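The TypeError at the bottom of the traceback is a Python operator-precedence issue: % binds tighter than +, so in the os.system line the format arguments are applied only to the last string literal, which contains no placeholders. A minimal sketch (the variable names are stand-ins for the real attributes):

```python
# Stand-in values for self.alias, self.version, self.os_ver.type
alias, version, os_type = "AIDAJNI", "3.2.3", "Linux"

# '%' is applied to '-g++.tar.gz' alone, which has no %s placeholders,
# so the two arguments cannot be converted:
try:
    cmd = 'tar -xzf %s-%s-' + os_type + '-g++.tar.gz' % (alias, version)
except TypeError as exc:
    message = str(exc)  # "not all arguments converted during string formatting"

# Parenthesising the concatenation lets the formatting see both placeholders:
cmd = ('tar -xzf %s-%s-' + os_type + '-g++.tar.gz') % (alias, version)
```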

So I don't really understand if it is successful or not. Then I try to build LCFIVertex with

-- Check for AIDAJNI: /rawcmos11/voutsi/ilcsoft/AIDAJNI/3.2.3
-- Java version 1.6.0 configured successfully!
-- Check for JAIDA: /rawcmos11/voutsi/ilcsoft/JAIDA/3.2.3 -- works
-- Check for AIDAJNI: /rawcmos11/voutsi/ilcsoft/AIDAJNI/3.2.3 -- failed to find AIDAJNI AIDAJNI library!!
-- Check for AIDAJNI: /rawcmos11/voutsi/ilcsoft/AIDAJNI/3.2.3 -- failed to find AIDAJNI FHJNI library!!
CMake Error at /rawcmos11/voutsi/ilcsoft/CMakeModules/v01-08-01/FindAIDAJNI.cmake:234 (MESSAGE):
Check for AIDAJNI: /rawcmos11/voutsi/ilcsoft/AIDAJNI/3.2.3 -- failed!!
Call Stack (most recent call first):
/rawcmos11/voutsi/ilcsoft/CMakeModules/v01-08-01/MacroLoadPackage.cmake:103 (FIND_PACKAGE)
/rawcmos11/voutsi/ilcsoft/CMakeModules/v01-08-01/MacroCheckDeps.cmake:36 (LOAD_PACKAGE)
CMakeLists.txt:267 (CHECK_DEPS)

So it seems like something went wrong in the AIDAJNI install. I have already installed Java and JAIDA. The path to the AIDAJNI library is also defined correctly.

Any comment is very welcome,

Thanks a lot

 Topic: LCTPC Conditions Database
LCTPC Conditions Database [message #2032] Wed, 11 August 2010 07:04
Messages: 43
Registered: August 2007
Location: DESY Hamburg
Just to inform everyone:

After a bit of delay, the conditions database for MarlinTPC is partly online:
so far, a database for developers is online on the server. Using it, you can test code that needs conditions data or store conditions data used in your test cases.

This database also serves as a kind of test case for the server setup in general, so please make use of it!

The Large Prototype database will go online a bit later.

If you want access or want a separate database for your own small prototype on the central server, please contact me.

Cheers, Ralf.
 Topic: Questions about fitting in MarlinTPC
Questions about fitting in MarlinTPC [message #1976] Fri, 07 May 2010 01:30
Messages: 125
Registered: July 2005
Location: CERN

Christoph had a few questions about the track fitting in MarlinTPC and we both
think they are of general interest. So I answer them here.


1.) Is there a "simple" chi^2 fit processor for straight lines?

You can use the TrackFitterSimpleChiSquareProcessor and fix the curvature to 0: use the parameters OmegaStart=0 and FixOmega=true.
Another alternative is the LinearRegressionProcessor. The linear regression is equivalent to a chi^2 minimisation with all errors set to 1.


2.) A chi^2 helix fitter seems to be implemented in TrackFitterSimpleChiSquareProcessor. Why are defocussing and diffusion parameters of the fit? To fit a function one only needs the functions, its parameters and errors. Why are there other parameters, which are not part of the fit?

The SimpleChiSquare fitter does not use the errors of the hits (for instance, to be able to run with the current hit finders, which do not set the errors; at least the TopoFinder does not).
In a first version all errors were set to 1, but it turned out that the errors in xy and z can be very different, and they both depend on z.

The SimpleChiSquare fitter estimates the errors as
sigma_xy = sqrt( TransDefocussing^2 + TransDiffusionCoef^2 * z )
sigma_z  = sqrt( LongDefocussing^2  + LongDiffusionCoef^2  * z )

where z is the drift distance. The names are misleading. Originally the idea was to use defocussing and diffusion coefficients from the conditions data. But it turned out that calculating the required values from them is not straightforward.
So "Defocussing" is the intrinsic detector resolution at zero drift, and "Diffusion" is the drift distance (diffusion) dependent part of the resolution.
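The error model above can be sketched as follows (a minimal illustration of the two formulas, not code from MarlinTPC; the coefficient values below are invented):

```python
import math

def sigma_xy(z, trans_defocussing, trans_diffusion_coef):
    # transverse error: intrinsic resolution plus drift-distance-dependent term
    return math.sqrt(trans_defocussing**2 + trans_diffusion_coef**2 * z)

def sigma_z(z, long_defocussing, long_diffusion_coef):
    # longitudinal error: same functional form with the longitudinal coefficients
    return math.sqrt(long_defocussing**2 + long_diffusion_coef**2 * z)

# at zero drift distance only the intrinsic ("defocussing") term survives
sigma_at_anode = sigma_xy(0.0, 0.3, 0.05)
```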

The documentation is definitely wrong, and we probably should also change the names to be clear.


3.) Is it correct that neither dEdx (and its error) nor the covariance matrix are stored, or even calculated?

Unfortunately yes. dEdx is not calculated at all. The covariance matrix is available in the Minuit fit, but it is not stored. The fitter is still alpha and got stuck in the phase when I was struggling that the fit converged at all. In the end it turned out that there was a problem with the start parameters and the pattern recognition (track finding). So the fit should converge, but I did not have the time to finish it yet.


4.) The base class / base fitter is TrackFitterSimpleChiSquare. Is it correct that some things are not cleanly implemented, i.e. hard-coded?

The actual base class is TrackFitterBase. This implements a generic version of calculateResiduals() (residual = distance perpendicular to the helix).
TrackFitterSimpleChiSquare inherits from this and implements the fitting function.
TrackFitterSimpleChiSquarePads inherits the fitter function from TrackFitterSimpleChiSquare, but reimplements calculateResiduals() to return the residuals along the pad row.

Minuit needs limits for the track parameters to converge reliably. These are currently hard-coded:
  1. omega=[-1 .. 1] (only tracks with radius larger 1 mm)
  2. D0=[-100 .. 100] (impact parameter +- 100 mm)
  3. phi=[-2 pi .. 2 pi] (two full circles, no real limit)
  4. tanLambda=[-100 .. 100] (only tracks with less than 100 cm of z variation per 1 cm in the xy projection)
  5. Z0=[-1000 .. 1000] (only tracks with Z0 +- 1 m)

The only problematic value is D0, since this is a sort of "vertex constraint". In prototype geometry D0 can be large, depending on the orientation of the readout module in global coordinates.
+- 1 m in z is fine for the LP, but having it hard-coded is still not good.
How about using +- 100 mm around the start value from the seed track, for both D0 and Z0?

The other values should be no real limitation, although they should be mentioned in the documentation in case someone runs into problems in exotic cases.



Martin Killenberg

 Topic: Alignment for TPC Modules
Alignment for TPC Modules [message #1946] Mon, 19 April 2010 04:06
Messages: 41
Registered: March 2009
Dear all,

We started a discussion on the topic of alignment in the MarlinTPC meeting of April 15, 2010.

My proposal is to have a baseline geometry description (on the basis of best knowledge at the beginning), encoded in the GEAR xml file.
Then a two-step procedure to
  1. determine the misalignment of the individual modules and store it as alignment constants in the conditions database
  2. apply the alignment during runtime in a second execution on hit level

I see several advantages. We directly have the versioning system of the conditions database, as well as the underlying idea of alignment represented in the right category -- as conditions data. We also stay with the already established reconstruction method in two steps.

Any thoughts, comments?

Please note: for the Hit level alignment, I would suggest an additional flag for the hit (see another forum thread for this).


When you have eliminated the impossible, whatever remains, however improbable, must be the truth. (Sir A.C. Doyle in Sign of Four)
 Topic: VTXNoiseHits
VTXNoiseHits [message #1618] Wed, 05 November 2008 13:58
Messages: 6
Registered: January 2008
Has anybody ever used the VTXNoiseHits processor of MarlinReco? Does anybody know how to use it properly?

I am trying to simulate beam background using that processor, but my job keeps crashing in the PandoraPFA processor. I already changed the density per layer, but the crash moves from one event to another.

#8 <signal handler called>
#9 0xf52d669a in PandoraPFAProcessor::CreatePFOCollection ()

If I switch off VTXNoiseHits the job runs smoothly.

 Topic: Default Analyses in MarlinTPC
Default Analyses in MarlinTPC [message #1459] Fri, 02 May 2008 05:59
Messages: 125
Registered: July 2005
Location: CERN
I would like to discuss which default analysis processors should be available for the large prototype (LP) TPC.

Currently available (MarlinTPC r1076):
  • BiasedResidualsProcessor
    Distribution of residuals wrt. track where all hits are included in track fit. Works for straight tracks and helices.
  • HitAndTrackChargeProcessor
    Histograms of charge per hit, charge per track and charge per track length.
  • LinearGeometricMeanResoutionProcessor
    Calculate residuals with the test hit included and excluded from the track fit. Implementation for straight lines using linear regression.
  • LinearThreePointResolutionProcessor
    Calculate residuals using the three-point method. Implementation for straight lines.
  • TimePixClusterSizeProcessor
    Size of clusters (number of pixels and cluster radius) on the TimePix chip
  • TimePixOccupancyProcessor
    Count how many times a pixel has been hit on the TimePix chip
  • TimePixTOTDistributionProcessor
    Distribution of TOT values of all pixels on the TimePix
  • XYZDistrubutionProcessor
    Distributions of reconstructed x, y, and z coordinate of all hits in an event
  • XYZDistrubutionProcessor
    Distributions of reconstructed x, y, and z coordinate of hits on tracks

Also needed:
  • ResidualsReferenceTrackProcessor
    Distribution of residuals wrt. reference track
  • PadOccupancyProcessor

The geometric mean processor should be extended to helix tracks (lower priority, since this method is time-consuming; the biased residuals are OK for long tracks with many hits and can be corrected by sqrt( nHits / (nHits - nDoF) )).
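As a quick numeric sketch of that correction factor (illustrative only; the nHits and nDoF values below are invented):

```python
import math

def biased_residual_correction(n_hits, n_dof):
    # scale factor sqrt( nHits / (nHits - nDoF) ) for residual widths
    # when the test hit is included in the track fit (biased residuals)
    return math.sqrt(n_hits / (n_hits - n_dof))

# e.g. a straight line in one projection has 2 fit parameters;
# with 20 hits on the track the correction is small:
factor = biased_residual_correction(20, 2)
```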

What else is missing?

Martin Killenberg

 Topic: Evolving the Track Class
Evolving the Track Class [message #1092] Tue, 11 September 2007 13:03
Messages: 16
Registered: June 2007
Location: Fermilab
Dear Colleagues,

I would like to start a discussion about ways in which we can improve the LCIO Track class. The discussion might also touch related classes such as hits and vertices. I have created a wiki page to launch the discussion and I invite all of you to contribute. The page is on the SLAC Confluence wiki at: +LCIO+Track+Class

I prefer comments within the wiki but, if I am in the minority, we can continue the discussion within this forum. I will monitor both the wiki and this forum.

The instructions for getting an account on the wiki are linked from:


Rob Kutschke

Rob Kutschke Computing Division, MS#234 Fermi National Accelerator Laboratory
Phone: (630) 840-5645 P.O. Box 500
Fax: (630) 840-2783 Batavia, IL 60510
 Topic: New Home for MarlinTPC
New Home for MarlinTPC [message #797] Fri, 20 April 2007 01:37
Messages: 125
Registered: July 2005
Location: CERN

MarlinTPC has a new home. A subversion server has been set up on
The cvs repository in Zeuthen has been taken offline.

A short introduction on how to get MarlinTPC and use the subversion repository can be found on

To document MarlinTPC a wiki has been set up:
It provides a user workbook with information on how to compile and use MarlinTPC, and a developer workbook with details about processor programming, usage of the version control system, etc.

If you want to be informed about new versions of MarlinTPC you can subscribe to the MarlinTPC mailing list:

If you want to become a registered developer with write access to the subversion repository, please contact me.



Martin Killenberg

 Topic: TrackCheater Covariance Matrices
TrackCheater Covariance Matrices [message #659] Mon, 22 January 2007 10:50
Messages: 5
Registered: June 2006
Location: Oxford

I was hoping to use the TrackCheater processor in MarlinReco, but it does not currently provide covariance matrices for the tracks. I see that the helix fitter used returns the diagonal terms; are these suitable for use?
 Topic: INSTALL of MarlinTPC
INSTALL of MarlinTPC [message #549] Fri, 07 July 2006 03:30
Messages: 9
Registered: February 2005
Location: DESY Hamburg
Hi all,

for compiling the processors of MarlinTPC it is recommended to set:
export LCIO="Wherever it is"/lcio/v01-07
export PATH=$LCIO/tools:$LCIO/bin:$PATH

export MARLIN="Wherever it is"/marlin/v00-09-04

export TPCCondData="Wherever it is"/tpcconddata

The submitted examples work without LCCD and GEAR, but you should set:
export LCCD="Wherever it is"/lccd/v00-03
export GEAR="Wherever it is"/gear/v00-03beta
export PATH=$GEAR/tools:$GEAR/bin:$PATH

If you want to use the CondDBDatabase you must set:
export CondDBMySQL="Wherever it is"/CondDBMySQL

export MYSQL_PATH="Wherever it is"/mysql

I hope this will help you.


Matthias Enno Janssen
Notkestr. 85
22607 Hamburg

Bldg. 1d / Room 28
Phone: +49/(0)40/8998 3137
Fax: +49/(0)40/8998 1812

 Topic: LCIO classes for Timepix readouts
LCIO classes for Timepix readouts [message #525] Wed, 21 June 2006 11:03
Messages: 23
Registered: June 2006
Location: University of Bonn

I have a couple of questions concerning the usage of the LCIO tracker classes for Timepix based readouts.

The Timepix chip can be run in the following modes
(the mode can be chosen on a pixel by pixel basis):

1. Time-over-threshold (TOT) mode:
Pixel provides total time (in clock counts) over threshold for all hits in a readout cycle.

2. Medipix mode:
Pixel provides number of hits over threshold during readout cycle

3. Timepix mode:
Pixel provides time (in clock counts) between first hit and end of the readout cycle.

4. Fourth mode:
Probably not needed for TPC applications.

How shall we store these data in LCIO?

TrackerData (containing the raw data for this particular type of readout technology and for TDC-based readouts -- see previous discussion):

Mode 1: How shall we store the time information? The time variable expects a drift time, not a TOT (which is more charge information than time information). Should we not use the time variable, and instead use the first entry of the getChargeValues() vector to save the TOT -- or is there a better solution? What convention shall be used to indicate that the time variable is not used? Set it to a value < 0?

Mode 2: How shall we store the number of hits (n) over threshold?
Fill n TrackerData objects for one channel in the corresponding collection? What charge should they contain in this case?

Mode 3: Here the time provided by the pixels is T_cycle - t_drift
with T_cycle = duration of readout cycle and t_drift = drift time. Storing this information in the time variable (expected to contain a drift time) would be misleading.


Here we should use some of the quality bits to indicate the operation mode of the pixel. Should we use official bits for that or some of those reserved for private usage?
From these bits we can then decide whether the time and charge variables contain useful information (depending on the operation mode).


Here again we should use some of the quality bits to indicate that there might be hits with incomplete information.
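One way to sketch that bookkeeping - assuming, purely for illustration, two private-use quality bits for the pixel mode and the mode-3 relation t_drift = T_cycle - (counted clock ticks) x (clock period); the bit positions and units here are hypothetical, not an official LCIO convention:

```java
public class TimepixQuality {
    // Hypothetical layout: two of the private-use quality bits hold the
    // pixel operation mode (1 = TOT, 2 = Medipix, 3 = Timepix).
    static final int MODE_SHIFT = 30;               // assumed bit positions
    static final int MODE_MASK  = 0x3 << MODE_SHIFT;

    /** Store the operation mode in the quality word. */
    static int encodeMode(int quality, int mode) {
        return (quality & ~MODE_MASK) | ((mode & 0x3) << MODE_SHIFT);
    }

    /** Recover the operation mode from the quality word. */
    static int decodeMode(int quality) {
        return (quality & MODE_MASK) >>> MODE_SHIFT;
    }

    /** Mode 3: the pixel counts clock ticks from the first hit to the end of
     *  the readout cycle, so the drift time is the cycle length minus the
     *  counted time. */
    static double driftTimeNs(int counts, double clockPeriodNs, double cycleNs) {
        return cycleNs - counts * clockPeriodNs;
    }

    public static void main(String[] args) {
        int q = encodeMode(0, 3);
        System.out.println("mode = " + decodeMode(q));                      // 3
        System.out.println("t_drift = " + driftTimeNs(1000, 10.0, 100000.0)); // 90000.0
    }
}
```

With such a convention, downstream code can decide from the two bits whether the time and charge variables of a TrackerData object carry meaningful information.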

Any suggestions, comments, ...?

Cheers, Peter
 Topic: Vertex Class for LCIO?
Vertex Class for LCIO? [message #474] Tue, 16 May 2006 09:54
Messages: 80
Registered: January 2004
Dear Colleagues,
Please see this thread in the LCIO forum for a discussion on whether a new Vertex class is needed for LCIO.
Norman Graf
Forum: Analysis Tools
 Topic: problems installing v17-06 of ilcsoft
problems installing v17-06 of ilcsoft [message #2363] Thu, 28 May 2015 06:57
Messages: 6
Registered: May 2015
Location: University of Sussex
Hi all,

I'm trying to install v17-06 of ilcsoft using the automatic installation:

./ilcsoft-install -i releases/v01-17-06/release-scratch.cfg

Everything seems to go OK until it is compiling the FastJet package, and there I have the error below.

Did anyone have the same problem? Any help with this would be really appreciated!




-- for some reason the log does not specify which variable/pointer is not declared. Sad

mv -f .deps/libfastjet_la-PseudoJet.Tpo .deps/libfastjet_la-PseudoJet.Plo
mv -f .deps/libfastjet_la-ClusterSequence_TiledN2.Tpo .deps/libfastjet_la-ClusterSequence_TiledN2.Plo
mv -f .deps/libfastjet_la-ClusterSequence_CP2DChan.Tpo .deps/libfastjet_la-ClusterSequence_CP2DChan.Plo
mv -f .deps/libfastjet_la-ClusterSequence.Tpo .deps/libfastjet_la-ClusterSequence.Plo
mv -f .deps/libfastjet_la-ClosestPair2D.Tpo .deps/libfastjet_la-ClosestPair2D.Plo
make[1]: Leaving directory `/research/epp/ilc/software/ilcsoft/v01-17-06/FastJet/2.4.2/build/src'
make: *** [all-recursive] Error 1
Making install in src
make[1]: Entering directory `/research/epp/ilc/software/ilcsoft/v01-17-06/FastJet/2.4.2/build/src'
/bin/sh ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I../../FastJet/src -I../include/fastjet -O3 -Wall -g -DDROP_CGAL -I../../FastJet/src/../include -MT libfastjet_la-ClusterSequence_N2.lo -MD -MP -MF .deps/libfastjet_la-ClusterSequence_N2.Tpo -c -o libfastjet_la-Cluster$
libtool: compile: g++ -DHAVE_CONFIG_H -I. -I../../FastJet/src -I../include/fastjet -O3 -Wall -g -DDROP_CGAL -I../../FastJet/src/../include -MT libfastjet_la-ClusterSequence_N2.lo -MD -MP -MF .deps/libfastjet_la-ClusterSequence_N2.Tpo -c ../../FastJet/src/ -fPIC -DPIC -o .li$
In file included from ../../FastJet/src/
../../FastJet/src/../include/fastjet/internal/ClusterSequence_N2.icc: In instantiation of '...':
../../FastJet/src/ required from here
../../FastJet/src/../include/fastjet/internal/ClusterSequence_N2.icc:109:39: error: 'swap' was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
if (jetA < jetB) {swap(jetA,jetB);}
In file included from /cvmfs/,
from ../../FastJet/src/../include/fastjet/ClusterSequence.hh:57,
from ../../FastJet/src/../include/fastjet/internal/ClusterSequence_N2.icc:3,
from ../../FastJet/src/
/cvmfs/ note: '...' c>$
swap(multiset<_Key, _Compare, _Alloc>& __x,
In file included from ../../FastJet/src/
../../FastJet/src/../include/fastjet/internal/ClusterSequence_N2.icc: In instantiation of '...':
../../FastJet/src/ required from here
../../FastJet/src/../include/fastjet/internal/ClusterSequence_N2.icc:109:39: error: 'swap' was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
if (jetA < jetB) {swap(jetA,jetB);}
In file included from /cvmfs/,
from ../../FastJet/src/../include/fastjet/ClusterSequence.hh:57,
from ../../FastJet/src/../include/fastjet/internal/ClusterSequence_N2.icc:3,
from ../../FastJet/src/
/cvmfs/ note: '...' c>$
swap(multiset<_Key, _Compare, _Alloc>& __x,
make[1]: *** [libfastjet_la-ClusterSequence_N2.lo] Error 1
make[1]: Leaving directory `/research/epp/ilc/software/ilcsoft/v01-17-06/FastJet/2.4.2/build/src'
make: *** [install-recursive] Error 1
 Topic: Check if an MC particle's vertex is in calorimeters
Check if an MC particle's vertex is in calorimeters [message #2077] Wed, 29 September 2010 08:25
Messages: 14
Registered: November 2007
Dear all,
is there an easy way to check if an MC particle was born inside one of the calorimeters, e.g. the ECAL? I've seen that there is a method:

GearMgr -> getPointProperties() -> isCalorimeter()

but this does not work out of the box. My first attempt was something like this:

  gear::GearXML gearXML("myfile.xml");
  gear::GearMgr *gearMgr = gearXML.createGearMgr();

but this method is not implemented.

I've also tried using the GEAR calorimeter parameters from the xml file, but this proves not to be trivial; it still requires hard-coded numbers to create an ECAL volume.

Maybe I'm missing something, but I could not find an obvious way to do this. If anyone has a better idea, please let me know.
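As a fallback, the hard-coded check at least boils down to very little code. A minimal sketch of a cylindrical-barrel containment test - the extents below are purely illustrative placeholders, not real detector numbers; in a real job rIn/rOut/zHalf would be taken from the GEAR CalorimeterParameters extent:

```java
public class CaloVolume {
    /** Crude check whether a point (mm) lies inside a cylindrical barrel
     *  shell of inner radius rIn, outer radius rOut and half-length zHalf. */
    static boolean inBarrel(double x, double y, double z,
                            double rIn, double rOut, double zHalf) {
        double r = Math.hypot(x, y);  // transverse radius
        return r >= rIn && r <= rOut && Math.abs(z) <= zHalf;
    }

    public static void main(String[] args) {
        // made-up ECAL-barrel-like extents, for illustration only
        double rIn = 1850.0, rOut = 2030.0, zHalf = 2350.0;
        System.out.println(inBarrel(1900.0, 0.0, 100.0, rIn, rOut, zHalf)); // true
        System.out.println(inBarrel(100.0, 0.0, 100.0, rIn, rOut, zHalf));  // false
    }
}
```

This ignores staves, gaps and endcaps, of course, but is often good enough for flagging MC particles created inside the calorimeter volume.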

Thank you in advance for your help,
 Topic: Concerning TrackerHits vs. SimTrackerHits
Concerning TrackerHits vs. SimTrackerHits [message #1838] Sat, 15 August 2009 12:42
Messages: 3
Registered: December 2008
I am trying to exclude hits found on tracks in one particular track finding
algorithm (SIDTracker) from the list of hits used by another track
finding algorithm (AxialBarrelTracker or Garfield). The first algorithm
uses TrackerHits, while the next algorithm uses
SimTrackerHits. Is there a one-to-one correspondence between
TrackerHits and SimTrackerHits, with pointers connecting the two lists,
so that I can eliminate TrackerHits on SIDTracks from the list of
SimTrackerHits used in the other algorithms? Alternatively, I could look
for and eliminate SimTrackerHits in proximity to TrackerHits found on
SIDTracker tracks, but that seems less appealing.
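The proximity alternative could look like the following - a minimal sketch in which plain {x,y,z} arrays stand in for the SimTrackerHit and TrackerHit objects, and the distance tolerance is an assumption to be tuned:

```java
import java.util.ArrayList;
import java.util.List;

public class HitVeto {
    /** Remove every "sim" position that lies within tol (mm) of any already
     *  used hit position - a stand-in for vetoing SimTrackerHits near
     *  TrackerHits found on SIDTracker tracks. Purely illustrative. */
    static List<double[]> veto(List<double[]> simHits,
                               List<double[]> usedHits, double tol) {
        List<double[]> kept = new ArrayList<>();
        for (double[] s : simHits) {
            boolean near = false;
            for (double[] u : usedHits) {
                double dx = s[0] - u[0], dy = s[1] - u[1], dz = s[2] - u[2];
                // compare squared distances to avoid the sqrt
                if (dx * dx + dy * dy + dz * dz <= tol * tol) {
                    near = true;
                    break;
                }
            }
            if (!near) kept.add(s);
        }
        return kept;
    }
}
```

If a genuine TrackerHit-to-SimTrackerHit relation exists in the event, following the pointers is clearly preferable; the proximity veto is only the fallback described above.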
 Topic: JAS3 support of HBOOK and ROOT files
icon8.gif  JAS3 support of HBOOK and ROOT files [message #1323] Wed, 05 December 2007 09:41
Messages: 10
Registered: March 2004
Location: HEPHY Vienna, Austria, EU
Referring to the current "JAS3" version 0.8.3 build 1686 (Feb 1, 2007), and to the older 0.8.3 build 1643 (April 5, 2006), problems occur when opening an HBOOK or ROOT file (the latter created from that HBOOK file by "h2root" version 3.10 or 5.14, respectively, and with default or no compression, respectively). The platforms used are Debian-Linux (current JAS3 version) and MacOS X 10.4 (both versions).

Older JAS3 version 0.8.3 build 1643 (April 5, 2006):

HBOOK file opening = ok
ROOT 3.10 (default) = ok,
ROOT 5.14 (default) = see (2),
ROOT 5.14 (no compr.) = see (2);

Current JAS3 version 0.8.3 build 1686 (Febr. 1, 2007):

HBOOK file opening = see (1),
ROOT 3.10 (default) = ok,
ROOT 5.14 (default) = see (2),
ROOT 5.14 (no compr.) = see (2).


(1) "Error during command processing"
java.lang.RuntimeException: Error loading native library: freehep-hbook-2.0
at ...
Caused by:
java.lang.UnsatisfiedLinkError: no freehep-hbook-2.0 in java.library.path
at java.lang.ClassLoader.loadLibrary( ...

(2) "Error opening file"
Exception: Error during decompression (size=5597/18939)
at ...
Caused by: invalid stored block lengths
at Method)
at ...

In summary:

(1) the current JAS3 version cannot open HBOOK files, whereas the older version can do it;
(2) both JAS3 versions can open ROOT 3.10 files, but cannot open ROOT 5.14 files.

Winfried Mitaroff
(HEPHY Vienna)
Forum: Reconstruction
 Topic: 250 GeV DBD samples
question.gif  250 GeV DBD samples [message #2349] Sat, 20 December 2014 02:51
Messages: 22
Registered: June 2004
Location: LAL Orsay
Dear Colleagues,

we would like to look at b quark production in the 250 GeV samples as produced for the DBD, e.g.

lfc-ls /grid/ilc/prod/ilc/mc-dbd/ild/rec/250-TDR_ws/2f_Z_hadronic/I LD_o1_v05/v01-16-p10_250

We are wondering whether these files were produced with or without the gamma gamma background. I assume that they include the background, as there is no corresponding label anywhere. Do files without the gamma gamma background also exist? From a quick scan through the grid directories I could not find corresponding files, but maybe I have missed something.

We (or better said Sviatoslav) are capable of running the reconstruction ourselves but would of course prefer centrally produced files.

Thanks in advance for advice and help and Merry Christmas,

 Topic: RecoMCTruthLinker
RecoMCTruthLinker [message #2222] Fri, 09 March 2012 07:42
Messages: 13
Registered: June 2008
Location: LYON

I don't know in detail what the RecoMCTruthLinker processor does.
In the standard config (ILDConfig v02-00) it is used, and it requires a collection of LCRelations between the SimCalorimeterHits and the CalorimeterHits.

In my setup, I'm producing one LCRelation for HCAL and one for ECAL. So to feed the RecoMCTruthLinker properly, I need to merge these 2 collections in one.

Is there a processor that can do such a merge ?
 Topic: Matching
Matching [message #1814] Wed, 22 July 2009 06:54
Messages: 16
Registered: September 2008
Location: University of Rochester
Hi, I tried to create a matching program between virtual hits and real hits in order to get the tracks. I did this by trying to match the Cartesian coordinates, but it doesn't seem to produce the right results. It only matches a few hits every few events. Is there anything wrong with the code?


// fragments from the processor's process(...) method:
double[] VirtPos = new double[3];
VirtPos[0] = rpVect[0];
VirtPos[1] = rpVect[1];
VirtPos[2] = rpVect[2];

List<CalorimeterHit> hits = null;
try {
    hits = event.get(CalorimeterHit.class, hcalHitmapName);
} catch (Exception e) {}
if (hits == null) return;

for (int i = 0; i < hits.size(); ++i) {
    CalorimeterHit ihit = hits.get(i);
    // ...
}

if (subdetName.equals(ecalSubdetName)) {
    List<CalorimeterHit> hitsE1 = null;
    try {
        hitsE1 = event.get(CalorimeterHit.class, ecalHitmapName);
    } catch (Exception e) {}
    if (hitsE1 == null) return;

    for (int i = 0; i < hitsE1.size(); ++i) {
        CalorimeterHit ihit = hitsE1.get(i);
        // ...
    }
}

protected void matchHitsXYZ(double[] realPos) {
    AIDA aida = AIDA.defaultInstance();
    int nhitsTotal = 0;

    // loop through virtual hits; accept a match within +/- 1 mm in x, y and z
    for (double[] virtPos : VirtPosList) {
        if (realPos[0] - 1 <= virtPos[0] && virtPos[0] <= realPos[0] + 1
                && realPos[1] - 1 <= virtPos[1] && virtPos[1] <= realPos[1] + 1
                && realPos[2] - 1 <= virtPos[2] && virtPos[2] <= realPos[2] + 1) {
            System.out.println("Matched hit at: (" + realPos[0] + "," + realPos[1] + "," + realPos[2] + ")");
            aida.cloud2D("Y vs. X Matched").fill(realPos[1], realPos[0]);
            foundHits.add(new BasicHep3Vector(realPos[0], realPos[1], realPos[2]));
            ++nhitsTotal;
        }
    }
    System.out.println("Number of virtual hits: " + VirtPosList.size() + " Number of real hits: " + nhitsTotal);
}

 Topic: A naive question about the output of marlin.
A naive question about the output of marlin. [message #1666] Wed, 17 December 2008 23:35
Messages: 15
Registered: September 2008
Location: IU
Hi all,

Sorry for bothering you with such a naive question, but which part of the Marlin output are the reconstructed particles?

I saw several reconstructed particles classified by jets, but couldn't see an overall list.

I am using the ilcsoft-1.5.02 release, with stdreco.xml and a mumuh event file.

The output is dumped with dumpevent. (Is there any better way to examine the outputs?)

 Topic: reconstruction particles
reconstruction particles [message #1502] Thu, 12 June 2008 04:37
Messages: 6
Registered: April 2008
Location: Institute of Nuclear Phys...
Hello everybody,
I am a student at the Institute of Nuclear Physics in Cracow and I am trying to get some information from my collections. I simulated e+e- -> ttbar at 500 GeV in the CMS frame (Pythia generator + Mokka). Now I'm studying Marlin and I need some help with this software. I used "example_steering_file_LDC01_05Sc.xml" from the $ILCSoft/PandoraPFA folder. I obtained "output_file.slcio", which has a lot of collections.
My questions are the following:

1. I need information about the momentum of reconstructed particles. I know that the "MCParticle" collection includes this info (getMomentum()), but that is info about the simulated particles which caused the hits. Where can I obtain the momentum of reconstructed particles, so that I can histogram it and compare with the simulation?

2. I understand that all "Sim.." collections are simulated by Mokka and used by other processors as primary input collections for further reconstruction. There is info about position (getPosition()) - does this function return "x,y,z" or "r,phi,z" in a cylindrical coordinate system?

3. Can I get info about particle momenta from the "TrackerHit" and "Track" collections? Does any method exist to compute it from these collections?

4. In "ReconstructedParticle" collection I find a getMomentum() function, but there is only few low energy particles(pi+,pi-,neutrons,photons) - Is it correct?? Does collection "ReconstructedParticle" is the only, which contain reconstructed particles from name? i.e. Does Pandora is such intelligent that can reconstructed W+W- bosons from ttbar and tell me: there are W bosons Smile??

5. How can I obtain the total energy distribution of reconstructed particles? Which collections include such information?

The last, most lame question: what kind of information can I get from the reconstructed collections (what is most interesting in this type of simulation: e+e- -> ttbar)?
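Regarding questions 1 and 5: once a collection of reconstructed particles is in hand, the kinematics are plain arithmetic. A sketch - this is not the LCIO API itself, just the math one would apply to the values returned by getMomentum() and getEnergy():

```java
public class RecoKinematics {
    /** |p| from the three momentum components returned by getMomentum(). */
    static double momentum(double px, double py, double pz) {
        return Math.sqrt(px * px + py * py + pz * pz);
    }

    /** Sum of particle energies, e.g. to fill a total-reconstructed-energy
     *  histogram once per event. */
    static double totalEnergy(double[] energies) {
        double sum = 0.0;
        for (double e : energies) sum += e;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(momentum(3.0, 4.0, 0.0));              // 5.0
        System.out.println(totalEnergy(new double[]{1.5, 2.5}));  // 4.0
    }
}
```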

Thanks in advance

[Updated on: Thu, 12 June 2008 04:42]

 Topic: org.lcsim Tracking Examples
org.lcsim Tracking Examples [message #1170] Wed, 03 October 2007 12:35
Messages: 5
Registered: June 2006
Location: Oxford

Can anyone point me to examples of how to use the various tracking options in org.lcsim? For example which ones work well together and are relatively mature and how to swap in cheating etc?
 Topic: How to get calorimeter cell indices from a segmentation class?
How to get calorimeter cell indices from a segmentation class? [message #396] Mon, 05 December 2005 12:37
Messages: 47
Registered: May 2004
Location: DeKalb, IL, USA

I am trying to develop the neighbour finding code in the non-projective endcaps (GridXYZ segmentation class). For testing, I am using sidaug05_tcmt geometry with some muons in the endcaps, and using the NearestNeighborClusterDriver. As expected, the GridXYZ.getNeighbourIDs() method gets called. So far so good.

Inside this method, I need to know the cell indices ilay, ix, iy, so I have this piece of code:

public long[] getNeighbourIDs(int layerRange, int xRange, int yRange) {
    System.out.println("Nonproj neighbs: " + layerRange + " " + xRange + " " + yRange);

    int klay = this.getValue("layer"); // <== NullPointerException
    int kx = this.getValue("x");
    int ky = this.getValue("y");
    int kz = this.getValue("z");
    System.out.println("NeighborID: ref=" + klay + " " + kx + " " + ky + " " + kz
            + " (hex " + Long.toHexString(saveID) + ")");
    // ...
}

The line indicated produces a NullPointerException.
What's the right way of retrieving the cell indices from inside GridXYZ.getNeighbourIDs() method?
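For reference, the cell indices are just bit fields packed into the cell ID; a self-contained sketch of that unpacking, with a completely made-up field layout (the real offsets and lengths come from the detector's ID description, which is what getValue consults):

```java
public class CellIdDecode {
    /** Extract an unsigned bit field of `length` bits starting at bit
     *  `offset` from a packed cell ID. */
    static long field(long id, int offset, int length) {
        return (id >>> offset) & ((1L << length) - 1);
    }

    public static void main(String[] args) {
        // hypothetical layout: layer in bits 0-6, x in bits 7-18, y in bits 19-30
        long id = 5L | (40L << 7) | (23L << 19);
        System.out.println("layer=" + field(id, 0, 7)
                + " x=" + field(id, 7, 12)
                + " y=" + field(id, 19, 12));
    }
}
```

The NullPointerException above is then presumably about the decoder's state (no current ID / ID description loaded) rather than about the unpacking arithmetic itself.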

 Topic: DigiSim announcement
DigiSim announcement [message #323] Fri, 05 August 2005 13:15
Messages: 47
Registered: May 2004
Location: DeKalb, IL, USA
Dear colleagues,

We would like to announce the "release" of DigiSim for wider use within the org.lcsim framework. Please find the updated documentation posted at . There are usage instructions in the documentation.

We are currently trying to adapt the NearestNeighborCluster algorithm to do clustering using the digitized CalorimeterHits instead of using the SimCalorimeterHits. I will post another announcement when this example is ready.

Please note that an update of the C++ version will be made available later, hopefully in a few weeks.

 Topic: Calorimeter Optimization for LC Detector Concepts
Calorimeter Optimization for LC Detector Concepts [message #274] Fri, 27 May 2005 14:57
Messages: 1
Registered: May 2005
I have looked at several HCAL options, comparing both absorber and active media types.

I first compared HCAL versions with identical scintillator readout, one version had 0.7 cm W absorber per layer and one had 2 cm SS as the absorber. In both versions, the HCAL was 4 nuclear interaction lengths thick, 55 layers of W/Scin compared to 34 layers SS/Scin. My specific finding was that the W/Scin HCAL performed better than the SS/Scin HCAL both for single particle energy resolution and for PFA results - both perfect PFA and the actual algorithm. Showers in W were more compact than showers in SS - confirming results I had seen from H. Videau earlier. My main motivation for this study was to see if a more compact HCAL could be built using a dense absorber, thus saving R**2 which presumably contributes to the cost of the magnet. My general conclusion was that not only was this goal achieved, but that the W/Scin HCAL even performed better (PFA and single particles) than the SS version.

Next I looked at 2 versions of HCAL with W absorber, one with scintillator and one with RPC as active media. Both of these were analyzed as digital calorimeters. My specific findings here were that the W/Scin digital HCAL and the W/RPC digital HCAL had similar performance as determined by calculating perfect PFA. The scintillator version had slightly better PFA resolution, presumably because of the higher number of hits per GeV for neutrals which, in digital mode, translates directly into better resolution.

Despite this, the SiD detector concept chose as its HCAL 2 cm SS absorber with RPC readout. I then looked at the perfect PFA performance of this detector and found that it performed worse than both the W/Scintillator and the W/RPC HCALs. In fact, the SiD combines the absorber with worse properties with the active media with fewer hits, so it was no surprise that the perfect PFA performance was so poor. In fact, it is impossible to obtain 30%/sqrt(E) resolution for the SiD detector with this option.

I then made suggestions as to how the performance of the SS/RPC HCAL could be improved based on all of my observations and found that these improvements led to a larger volume for the HCAL. I then suggested that maybe the optimal use of RPCs (generally gas HCAL) would be found in a larger-volume, lower B-field detector concept than the compact, higher B-field SiD. It seems to me, supported by the simulated detectors that I analyzed, that the optimal HCAL configuration for a compact, high B-field detector should have a dense absorber combined with a solid (or maybe liquid) active media. This optimizes (i.e. minimizes) the outer radius of the HCAL, which directly saves magnet costs as mentioned above, while maintaining good resolution for the neutral component of jets.

Of course, things like transverse segmentation and the minimum calorimeter radius affect the final PFA performance of the detector and are used to ultimately determine if a particular concept is viable - but, as I showed, the best perfect PFA performance I got for a compact, high B-field detector was with a W/Scin HCAL. It also seems to me that there might be a different optimal HCAL for the compact detector than for a large, low B-field detector. I wouldn't be surprised if it turned out that a W or SS/RPC HCAL would be a good choice for the LDC detector and that a W/Scintillator HCAL would be better for the SiD.

I would recommend that the LDC consider a 0.5 cm W or 1 cm SS absorber/1.2 mm RPC per layer HCAL. A 4 lambda deep HCAL of this construction would have 77 layers of W/RPC or 67 layers of SS/RPC and would be ~100 cm or ~120 cm from IR to OR respectively. By thinning the absorber, I think the resulting neutral particle resolution obtained in a PFA would allow the 30%/sqrt(E) goal to be obtained.
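The quoted layer counts can be roughly reproduced from the nuclear interaction lengths alone. A sketch, assuming λ_W ≈ 9.9 cm and λ_Fe ≈ 16.8 cm and counting only the absorber thickness per layer - which is why the tungsten numbers come out a couple of layers higher than the post's 55 and 77, since the active material also contributes to the depth:

```java
public class HcalLayers {
    // Approximate nuclear interaction lengths (cm); assumed for illustration.
    static final double LAMBDA_W  = 9.9;   // tungsten
    static final double LAMBDA_FE = 16.8;  // stainless steel (iron)

    /** Layers needed for `depth` interaction lengths when each layer
     *  contributes an absorber thickness of tAbs (cm). */
    static int layersFor(double depth, double lambda, double tAbs) {
        return (int) Math.round(depth * lambda / tAbs);
    }

    public static void main(String[] args) {
        System.out.println("4 lambda, 2 cm SS:   " + layersFor(4, LAMBDA_FE, 2.0)); // 34
        System.out.println("4 lambda, 0.7 cm W:  " + layersFor(4, LAMBDA_W, 0.7));  // 57
        System.out.println("4 lambda, 1 cm SS:   " + layersFor(4, LAMBDA_FE, 1.0)); // 67
        System.out.println("4 lambda, 0.5 cm W:  " + layersFor(4, LAMBDA_W, 0.5));  // 79
    }
}
```

The SS numbers (34 and 67) match the post exactly; the W numbers land within a few layers of the quoted 55 and 77.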
 Topic: Nonprojective calorimeter support
Nonprojective calorimeter support [message #271] Thu, 26 May 2005 04:24
Messages: 47
Registered: May 2004
Location: DeKalb, IL, USA

I got some time to work on the non-projective geometry in org.lcsim + GeomConverter. I was wondering what the current status is and what needs to be done at this point. My goal is to be able to do PFA studies in the org.lcsim framework. Your suggestions and comments are welcome.

 Topic: Conditions Data framework
Conditions Data framework [message #185] Mon, 31 January 2005 02:01
Messages: 233
Registered: January 2004
Location: DESY, Hamburg

I just posted a proposal for a conditions data framework to the Prototypes/Calorimetry forum. As this software might be interesting for a wider audience, e.g. TPC groups, I am posting a reference here as well: &rid=6#msg_184

Forum: LCIO
 Topic: SIOWriter couldn't write event header
SIOWriter couldn't write event header [message #2236] Wed, 01 August 2012 09:50
Messages: 13
Registered: June 2008
Location: LYON
Dear all,

Trying to run 2 processors in Marlin, the second one being LCIOOutputProcessor, I got a crash at the first event with the following message :

A runtime error occured - (uncaught exception):
lcio::IOException: [SIOWriter::writeEvent] couldn't write event header to stream: _scratch_RawHits_slcio0
Marlin will have to be terminated, sorry.

How could I debug such a thing? Is there a way to tell SIO to be more informative?
Is this the usual message when something hasn't been set properly?

In case you want to try to reproduce it :
I'm using ilcsoft release v01-14,
I'm running processors from the Trivent package release 8 ( svn co Trivent )
Look into the README file for instructions on how to compile the package.
I'm using the Marlin xml file which is under <Trivent>/steer/streamout.xml where I've uncommented the LCIOOutputProcessor.
The input file I'm using is on the calice grid at /grid/calice/SDHCAL/TB/CERN/PS_April2012/RAW/DHCAL_713775_I0 _0.slcio

Thanks for any advice.
 Topic: const correctness of LCTime
const correctness of LCTime [message #2181] Fri, 22 July 2011 09:46
Messages: 18
Registered: July 2009
Location: Desy

Dear developers.
Would it be possible to declare the get functions of LCTime (and of whatever other class, IMHO) const, so that they can be used in a const function?