You can manage your logging sets through the logging set summary view. To get to this view, click on LOGGING: in the workspace. This view lists all of the current logging sets and their status. To add a new logging set, click Add. You will be prompted for a name for your logging set. Enter a name and click OK. You will then be switched to the logging set view for your new logging set. Once a logging set is created, you can return to its logging set view by clicking on its name under LOGGING: in the workspace.
Once created, you can start or stop a logging set either by clicking the Begin and End buttons in the logging set view, or by right-clicking the logging set name and selecting Begin Logging Set or End Logging Set. Logging sets can also be started and stopped from script.
Auto-Start: If checked, the logging set is started as soon as the document is loaded, unless it is loaded in safe mode. The logging set will also be started when switching out of safe mode.
Logging Method: Determines how the data is logged to disk. The choices are:
ASCII: This is the default format, which logs to a delimited ASCII file. Delimited ASCII files can be read by almost every Windows data analysis package, such as Excel, and so this is a very flexible format. It is also easily opened and edited in other software, even Notepad. Because of this, the file can often be repaired if part of it is damaged. This is the recommended format for logging your data if you aren't going to log to a database.
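As a sketch of how such a file might be read back programmatically, here is a short Python example. The column layout shown, a time column followed by one column per channel, is an assumption for illustration; check your header file for the actual layout of your logging set.

```python
import csv
import io

# Hypothetical sample of a comma-delimited log: a time column followed
# by one value column per channel (assumed layout, for illustration).
sample = """Time,TempA,TempB
1045678800.123,21.5,22.1
1045678801.123,21.6,22.0
"""

reader = csv.DictReader(io.StringIO(sample))
rows = [{k: float(v) for k, v in row.items()} for row in reader]
print(rows[0]["TempA"])  # -> 21.5
```

Because the format is plain delimited text, a damaged file can be skimmed and trimmed in any text editor before being re-read like this.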
ODBC Database: This format logs to an ODBC compliant database. How to set up an ODBC database is described in a separate section. ODBC is convenient if you ultimately want your data in a database, but it is very slow, especially when logging directly to Microsoft formats such as Excel and Access. Performance is better when writing to a native SQL database. Database logging is not supported in all versions of DAQFactory.
Binary: There are three different binary modes. Each uses a different binary representation for the data. If you are going to use your data in an application that can read a binary format we suggest this method over the other methods as it is very fast both to log and to load into your analysis software, and it is the most compact. The three different modes allow further fine tuning of the data set for most compactness and are discussed in a separate section.
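The three binary layouts are described in their own section. Purely as an illustration of why binary files are compact and fast to read, the Python sketch below unpacks a flat stream of 8-byte doubles; this layout is an assumption for the example and is not necessarily any of DAQFactory's three modes.

```python
import struct

# Assumed layout for illustration only: rows of (time, ch1, ch2) as
# little-endian 8-byte doubles, no header. DAQFactory's actual binary
# modes are documented separately and may differ.
record = struct.Struct("<3d")

raw = record.pack(1045678800.0, 21.5, 22.1) + record.pack(1045678801.0, 21.6, 22.0)

# Fixed-size records mean no parsing: just step through the buffer.
rows = [record.unpack_from(raw, off) for off in range(0, len(raw), record.size)]
print(rows[1])  # (1045678801.0, 21.6, 22.0)
```

This is also why the header file described below matters so much for binary logs: without a record of the column layout, the bytes alone tell you nothing.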
File Name: The path and filename you would like to log data to. Click the Browse button to use the standard Windows file selection dialog. If you specified auto split files, a number or date stamp will be appended to the name (in front of the file extension).
You can optionally use date/time format specifiers in your file names. This allows you to create new files every hour, day, week, etc. and include the time stamp in the file name. To see what format specifiers are available, please see the section on the FormatDateTime() function, in 4.12.11. For example, if you wanted to create a new date-stamped file every day, you might put something like: c:\myfolder\myFile%y%m%d
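These specifiers behave like the usual C strftime codes, so the expansion for a given timestamp can be sketched in Python (the folder and base name are taken from the example above; the exact date shown is arbitrary):

```python
from datetime import datetime

stamp = datetime(2003, 2, 19)
# c:\myfolder\myFile%y%m%d with the date filled in; the logging set
# then appends the file extension after this expanded name.
name = stamp.strftime(r"c:\myfolder\myFile%y%m%d")
print(name)  # c:\myfolder\myFile030219
```

Since %d (and %y, %m) roll over naturally, each new day produces a new file name and therefore a new file.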
Data Source: This displays in place of File Name when you select ODBC Database. It should be the name of the ODBC data source as defined in the ODBC manager.
Note: This is NOT the name of the database! This has been a very common mistake despite our documentation so here it is again: THIS IS NOT THE NAME OF THE DATABASE SO IT SHOULD NOT BE A PATH TO A FILE! It should be the name you assigned in the ODBC manager. See the section on setting up an ODBC connection for more details.
SQL: This becomes available when you select ODBC Database. Clicking it displays a window that allows you to adjust the SQL commands used. This is sometimes necessary because different database engines use different dialects of SQL.
Table Name: This becomes available when you select ODBC database. This is the name of the table within the database where the data will be written.
Channels Available / Channels To Log: These two tables determine what data will be logged. The Channels Available table lists all the channels in your channel table. Since logging sets do not support direct logging of variables or virtual channels, these will not appear in this list. Select the channels you would like to log, either individually or by holding the Shift or Ctrl key to select multiple channels, then move them to the Channels To Log table using the >> button. The All, None, Up, Down, and << buttons will help you get the exact list you want, in the proper order. In the Channels To Log table, there is an additional column called Figs:. This determines the number of significant figures used when logging the data. It has no effect on binary formats, and may be limited in ODBC formats depending on your database.
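The Figs: setting corresponds to significant figures in the usual %g sense. As a Python sketch of the same rounding (the helper function is hypothetical, for illustration only):

```python
def to_sig_figs(value: float, figs: int) -> str:
    # Format a value with a fixed number of significant figures, the
    # way an ASCII logging column with Figs: set would be written.
    return f"{value:.{figs}g}"

print(to_sig_figs(21.5467, 3))     # 21.5
print(to_sig_figs(0.00123456, 4))  # 0.001235
```

Note that significant figures are not decimal places: 3 figures keeps 21.5 but would keep 0.00123 just as readily.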
Manual: The Manual button allows you to add additional columns to your logging set. This requires you to manually add data to the logging set through a sequence. Using this method you can log any sort of data in a logging set. Please see the section on logging set functions for more information. Two built-in manual columns that can be added without writing a sequence are Note and Alert, which will log any quick notes or alerts in with your data stream. Since notes and alerts are both strings, you cannot log them to a logging set using the binary logging method.
Mode: There are two different modes of logging. These are independent of the logging method and determine how data is collected and aligned to rows.
All Data Points (aligned): This mode will log all data that comes in. The data is aligned to place data with the same or similar times on the same row. The spacing of the rows may be variable depending on when your data is acquired.
Align Threshold: How close in time (in seconds) two data points have to be to be placed in the same row.
Note: when acquiring and logging data faster than 20 Hz, you'll need to set the Align Threshold parameter of your logging set to 0 to avoid missing data points in your logging set.
Align Mismatch: If a particular column does not align with the current row's time, DAQFactory can either leave the value blank (or 0 for binary modes), or copy the last value written.
Application: The All Data Points mode is useful when you want to make sure that every data point you acquire on the given channels is actually logged and no data massaging occurs before logging.
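A rough model of the alignment described above can be sketched in Python. This is a simplified illustration, not DAQFactory's actual algorithm: each incoming point joins the current row if its time is within the threshold of the row's time, and otherwise starts a new row.

```python
def align(points, threshold):
    """Group (time, channel, value) points into rows whose times fall
    within `threshold` seconds of the row's first time. A simplified
    model of aligned logging, for illustration only."""
    rows = []
    for t, channel, value in sorted(points):
        if rows and t - rows[-1]["Time"] <= threshold:
            rows[-1][channel] = value                 # close enough: same row
        else:
            rows.append({"Time": t, channel: value})  # start a new row
    return rows

points = [(0.00, "A", 1.0), (0.01, "B", 2.0), (1.00, "A", 1.1)]
print(len(align(points, 0.05)))  # 2 rows: the first two points merge
print(len(align(points, 0)))     # 3 rows: distinct times never merge
```

This sketch also illustrates the 20 Hz note above: with a nonzero threshold, a second sample of the same channel arriving inside the threshold would overwrite the first in that row, which is why fast data calls for a threshold of 0.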
Fixed Interval: This mode writes a row of data at a constant interval. The data written is either the most recent value for the particular column, or the average of the values acquired since the last row was written.
Interval: Determines how often in seconds a row of data is written.
Type: Determines what data is written. If average is selected, then the average of the data points that have arrived since the last row was written is used. If snapshot, then the most recent value of each column is used.
Application: The fixed interval mode is useful when you are acquiring data at different rates but you want the final logged data to be evenly spaced in time.
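The two Fixed Interval types can be sketched in a few lines of Python (a hypothetical helper, for illustration): at each interval tick, the samples collected since the last row collapse to a single value per column.

```python
def fixed_interval_value(samples, mode="Average"):
    """Collapse the samples collected since the last row was written
    into one value for the column. A sketch of the Fixed Interval
    mode's two types, for illustration only."""
    if mode == "Average":
        return sum(samples) / len(samples)
    if mode == "Snapshot":
        return samples[-1]          # most recent value only
    raise ValueError(mode)

samples = [20.0, 21.0, 25.0]        # readings since the last row
print(fixed_interval_value(samples, "Average"))   # 22.0
print(fixed_interval_value(samples, "Snapshot"))  # 25.0
```

Average smooths out noise between rows, while Snapshot preserves the instantaneous reading; which is appropriate depends on what the logged value is meant to represent.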
Time Format: Internally, DAQFactory records time as seconds since 1970. This yields the best precision possible and is the internal Windows standard, but it is not the format used by Windows' Office products like Excel, nor is it easily read by humans. Windows' Office products use decimal days since 1900. As such, we give you a choice. You can log using DAQFactory Time, which is best if you are going to import the data into a data analysis package, or if you are logging high speed data. You can use Excel Time if you are going to take the data into Excel and are not logging high speed data. Or, if you want the time column to be in what we like to call "human readable" format, you can select Custom and specify your own time format specifiers in the following property.
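The two time bases differ only by a fixed offset and scale: DAQFactory Time is seconds since 1970-01-01, while Excel's serial dates are fractional days in which 1970-01-01 falls at serial 25569. A sketch of the conversion (the function names are illustrative):

```python
def daqfactory_to_excel(seconds_since_1970: float) -> float:
    # Excel serial date: fractional days, with 1970-01-01 = 25569.
    return seconds_since_1970 / 86400 + 25569

def excel_to_daqfactory(excel_days: float) -> float:
    return (excel_days - 25569) * 86400

noon_jan_2_1970 = 36 * 3600          # 36 hours into 1970, in seconds
print(daqfactory_to_excel(noon_jan_2_1970))  # 25570.5
print(excel_to_daqfactory(25570.5))          # 129600.0
```

The division by 86400 is also why Excel Time suits slow data better: sub-second resolution ends up in the far decimal places of the day fraction.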
Time Sig Figs: Determines how many significant figures are used to write time values. A value of 9 yields time precise to the second, 12 to the millisecond and 15 to the microsecond. The maximum value is 15. Does not apply to the Custom time format.
Custom Time Formatting: If Custom time format is selected, this parameter takes a date / time format specifier to determine the format of the output. This follows the FormatDateTime() function exactly. To see what format specifiers are available, please see the section on the FormatDateTime() function, in 4.12.11. Typically you can use %c which just uses whatever Windows thinks is the appropriate format (i.e. MM/DD/YY HH:MM:SS in the United States).
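In Python's strftime terms, which follow the same C-style specifier conventions, the behavior of %c versus explicit specifiers looks like this (the sample timestamp is arbitrary):

```python
from datetime import datetime

stamp = datetime(2003, 2, 19, 14, 30, 0)
# %c defers to the current locale's idea of a date/time, much as
# described above; explicit specifiers pin the layout down instead.
print(stamp.strftime("%c"))                  # locale-dependent output
print(stamp.strftime("%m/%d/%y %H:%M:%S"))   # 02/19/03 14:30:00
```

If your files will be parsed by other software later, explicit specifiers are the safer choice, since %c output varies from machine to machine.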
Log Unconverted Data: If checked, the data logged will not have any conversions applied to it.
Application: This is largely a matter of preference. It is certainly more convenient to log converted data, since it saves you from having to convert the data after the fact. The problem is that many conversions are irreversible, especially if you forget what conversion you applied. By saving unconverted data, you always have the data in its rawest form and therefore should know what units it is in.
Continue on Error: If checked and a file error occurs, the logging set will continue to retry logging data to that file. If not checked, then the logging set will stop if a file error occurs.
Delimited By: Determines the delimiter used to separate values in the ASCII log mode. Enter Tab to use a tab delimiter.
Include Headers: If checked, a single row of properly delimited channel names is written at the top of each file. This only applies to ASCII mode. For ODBC, the channel names are used as the field names.
Header File: If checked, a separate file describing the logging set is written, with the same file name as the logging file but with .head appended. This is especially useful for binary logging methods, which are completely useless if you forget which column is which or even how many columns you have.
Auto Split Files: If checked, the logging files will be closed at a preset interval and a new file opened. A number is appended to each file name to properly organize all the files. Auto splitting files is most useful for long term acquisition to avoid large files and prevent data loss in the case of file corruption.
Data File Size: Used if auto split files is checked. Determines the number of rows written per file.
Use Date in File Name: Used if auto split files is checked. If this is checked, the date and time of file creation is used in the file name instead of a simple sequential number.
Application: The auto split files option is designed for users who will be taking data continuously for long periods of time. By enabling this option, DAQFactory will create multiple sequential data files. This will keep you from ending up with a giant 20 gig data file. Auto split is also designed to help alleviate possible data loss from corrupted files. Very often, when a file gets corrupted, all the data in that file is lost. If you split your data into separate files, file corruption may be limited to a small chunk of your data.
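As a sketch of how numbered split files accumulate, the Python helper below generates the sequence of names for a given row budget. The zero-padded counter scheme here is an assumption for illustration, not DAQFactory's exact naming.

```python
def split_names(base: str, ext: str, total_rows: int, rows_per_file: int):
    """File names a numbered auto-split logger might produce for a run
    of total_rows rows. Illustrative only; the padding and placement
    of the counter are assumptions."""
    files = -(-total_rows // rows_per_file)       # ceiling division
    return [f"{base}{n:04d}{ext}" for n in range(files)]

print(split_names("mylog", ".csv", 2500, 1000))
# ['mylog0000.csv', 'mylog0001.csv', 'mylog0002.csv']
```

With the Data File Size set to a sensible row count, a corrupted file costs you at most one slice of the run rather than the whole data set.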