Takes in a list of track lists (trackll) and exports each track list as a row-wise (ImageJ Particle Tracker style) .csv file in the working directory.
exportTrackll(trackll, cores = 1)
| Argument | Description |
|---|---|
| trackll | A list of track lists. |
| cores | Number of cores used for parallel computation. These can be cores on a workstation or on a cluster. Tip: when parallelized, each core is assigned one file; see the sketch below this table. |
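A minimal usage sketch for the cores argument, assuming trackll has already been created (for example with createTrackll) and that the base parallel package is used to detect how many cores the machine has:

```r
library(parallel)

# Use at most one core per track list (one output file per core), capped by
# the number of cores actually available on this machine.
n.cores <- min(length(trackll), detectCores())
exportTrackll(trackll, cores = n.cores)
```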
.csv file output
ImageJ Particle Tracker style .csv export was chosen because it fully preserves track frame data while keeping computation time short and remaining easy to read in Excel and similar programs.
To import this .csv export back into a trackll at any point (preserving all information), set input = 3 in createTrackll.
If the track list does not have a fourth frame-record column (not recommended), only the start frame of each track is written instead, and the export takes noticeably longer.
Running exportTrackll on a merged list of track lists (trackll) is not recommended. Also, ensure that the input trackll is a list of track lists and not just a single track list; a minimal structure check is sketched below.
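The following sketch only assumes the usual sojourner layout, in which a trackll is a list of track lists and each individual track is stored as a data frame with an optional fourth frame-record column; the variable name sample1 is a placeholder, not part of the package.

```r
# Illustrative guard before exporting: a bare track list has data.frames at
# its first level, whereas a trackll has track lists (further lists) there.
if (is.data.frame(trackll[[1]])) {
    # Wrap a single track list so exportTrackll() receives a list of track lists.
    trackll <- list(sample1 = trackll)
}

# Warn if the tracks lack a fourth (frame-record) column, since the export
# then falls back to start frames only and runs more slowly.
if (ncol(trackll[[1]][[1]]) < 4) {
    warning("Tracks have no fourth frame-record column; only start frames will be exported.")
}
```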
The naming scheme for each export is as follows:
[Last five characters of the file name]_[yy-MM-dd]_[HH-mm-ss].csv
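As an illustration of this scheme (not the package's internal code), the file name for a track list read from SWR1_WT_140mW_image6.csv could be assembled like this:

```r
# Hypothetical illustration of the naming scheme; sojourner builds the name
# internally, so none of these variables are part of its API.
file.name  <- "SWR1_WT_140mW_image6"
short.name <- substr(file.name, nchar(file.name) - 4, nchar(file.name))  # "mage6"
time.stamp <- format(Sys.time(), "%y-%m-%d_%H-%M-%S")                    # e.g. "20-05-09_18-59-48"
paste0(short.name, "_", time.stamp, ".csv")                              # "mage6_20-05-09_18-59-48.csv"
```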
```r
folder = system.file('extdata', 'SWR1', package = 'sojourner')
trackll = createTrackll(folder = folder, input = 3)
#> Reading ParticleTracker file: SWR1_WT_140mW_image6.csv ...
#> mage6 read and processed.
#> Process complete.

# Basic function call to exportTrackll into the current directory
exportTrackll(trackll)
#> Writing .csv row-wise output in current directory for mage6 ...
#> mage6_20-05-09_18-59-48.csv placed in current directory.

# Get current working directory
getwd()
#> [1] "/Volumes/SDXC Disc/Google Drive/SingleMoleculeTracking/sojourner-master/sojourner/docs/reference"

# Import the exported .csv back into a trackll (see Details above)
createTrackll(folder = getwd(), input = 3)
#> Reading ParticleTracker file: mage6_20-05-09_18-59-48.csv ...
#> 59-48 read and processed.
#> Process complete.
```
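As an optional follow-up (illustrative only, not part of the example output above), the exported files can be located in, and removed from, the working directory:

```r
# List the .csv exports just written to the working directory; uncomment the
# last line to delete them after a test run.
csv.out <- list.files(getwd(), pattern = "\\.csv$", full.names = TRUE)
csv.out
# file.remove(csv.out)
```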