Arbutus Analyzer Commands and Language Syntax

This section contains a complete description of every Analyzer command. A command description includes an explanation of a command's action and its parameters. Where necessary, examples are provided for illustrative purposes.

Accept, Activate, Age, Append, Assign, Benford, Calculate, Call, Classify, Close, Comment, Compare, Connect, Copy, Count, Cross Tabulate, Define Column, Define Field, Define Relations, Define Report / View, Delete, Dialog, Directory, Disconnect, Display, Do, Do Report, Dump, Duplicates, Else, End, Evaluate, Export, Extract, Fieldshift, Find, Gaps, Group, Help, If, Import, Index, Join, Let, List, Locate, Loop, Merge, Notify, Open, Password, Pause, PDF, Print, Profile, Protect, Quit, Random, Refresh, Relations, Rename, Report, Results, Retrieve, Return, Run, Sample, Save, Search, Seek, Sequence, Set with Preference Equivalents, Set with No Preference Equivalents, Size, Sort, SQL, Statistics, Stratify, Summarize, Table History, Top, Total, Verify

Accept

The Accept command allows users to create interactive procedures that pause to display a dialog prompting a user for information. This command can be issued only in command mode. It cannot be used within a Group command. It is good practice to place all Accept commands at the beginning of a procedure.

Use Accept to create procedures that prompt the user to provide information, such as sampling intervals, file names, department numbers or report headers and footers. Analyzer stores the user input in a character variable. When the user input is a numeric value needed for a subsequent Analyzer command, the command interprets the user input as a character value. You may need to convert the character data in the variable to numeric or date format before you can use it in other operations.

To use the literal content of a variable in a subsequent command or operation, refer to the variable by name. To have Analyzer interpret the content of a variable as a field name or a file name, place the percent character (%) before and after the variable name. This converts the variable name into a macro and tells Analyzer to treat the content of the variable as a reference to a named field or file. For more information, see Variables, Interactive Procedures and Macro Substitution.

To create more complex dialogs, use the Dialog command. For more information, see Dialog.

Creating Dialog Boxes with the Accept Command

When Analyzer encounters the Accept command, it temporarily suspends the procedure and displays a dialog prompting the user to enter information. Clicking [OK] in the dialog instructs Analyzer to accept the information and resume processing of the procedure. Clicking [Cancel] cancels execution of the procedure.

There are two ways to request input from a user: the user can enter text in a text box or make a selection from a drop-down list. Drop-down lists can contain the names of fields, variables or Project items such as procedures, table layouts, indexes, Views and reports. To create a drop-down list, use the optional Fields parameter with one or more of the field-type identifiers. For more information, see Field Type Syntax.

If a table is open when an Accept or Dialog command is issued, drop-down lists will contain only specified items from the currently open table. If no table is open when the command is issued, drop-down lists will be blank.

Parameters

The Accept command has the following command parameters:

Fields
Allows users to use drop-down lists instead of text boxes for user input.
Drop-down lists can contain one or more field types.

To
Assigns the user response to a named character variable.

Note: For a description of supported field modifiers see Field Modifiers.

Command Mode Syntax

Each separate Accept command generates a separate dialog. To create a dialog with multiple prompts, use a single Accept command with multiple prompt strings and variable names separated by commas.

ACCEPT "prompt-string" FIELDS "field-types" TO variable-name

prompt-string specifies the message that prompts the user for information during execution of the procedure. The prompt string must be surrounded by quotes and should be brief.

variable-name specifies the name of the character variable that stores the user's response and does not require quotes. If the named variable already has a value, the dialog displays it.

field types specify the field types to be displayed in the drop-down list, and are optional. The field types must be surrounded by quotes. Field names and variable names can appear in the same drop-down list. For example, FIELDS "CNLD" generates a drop-down list containing the names of all fields, and FIELDS "CNLDcnld" generates a drop-down list containing the names of all fields and all variables.

Note: Do not mix "xixrxbxfxw" items with "CcNnLlDd" items. You should not have Project item types in the same list with field names or variable names. This may result in inappropriate user choices that cause processing errors.

Field Type Syntax

Field type   Generates a drop-down list containing:
C            Character fields
c            Character variables
N            Numeric fields
n            Numeric variables
L            Logical fields (filters)
l            Logical variables
D            Date fields
d            Date variables
xb           Procedures
xf           Tables
xi           Indexes
xr           Views and reports
xw           Workspaces

Single-Prompt Dialog Boxes

To create a single-prompt dialog, use only one prompt string for each instance of the Accept command. Use the following syntax to create a procedure named Random_sample that generates separate prompts for a file name, a drop-down list of fields that can be sampled, a sampling interval and a random start value.

Note: The % character shown in the OPEN command syntax below is used to refer to the contents of the named variable when the variable is used to store a file name or a field name. For more information see Macro Substitution.

ACCEPT "Enter the filename to analyze" TO FILENAME
OPEN %FILENAME%
ACCEPT "Select the field to sample" FIELDS "N" TO SAMFIELD
ACCEPT "Enter the sampling interval" TO SAMINT
ACCEPT "Enter the random start value" TO RANSTART
SAMPLE ON %SAMFIELD% INTERVAL SAMINT FIXED RANSTART RECORD TO INVSAMP

When you run the procedure, the first dialog prompts you for the file name. The second dialog prompts you for the name of the field to be sampled. The syntax for this prompt, shown in the third command line above, uses the keyword:

FIELDS 'N'

As a result, this dialog contains a drop-down list of available numeric fields. The third dialog prompts you for the interval value, and the fourth dialog prompts you for the random start value. Analyzer displays the results of the procedure in the Command Log.

Multiple-Prompt Dialog Boxes

To create a multiple-prompt dialog, enter multiple sets of command parameters separated by commas for one instance of the Accept command, all on the same line, as follows:

ACCEPT 'Select the field to sample' FIELDS 'N' TO SAMFIELD, 'Enter the sampling interval' TO SAMINT, 'Enter the random start value' TO RANSTART

The Accept command above produces a dialog with one drop-down list of field names and two text boxes.
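Because Accept always stores the response as character data, a typical follow-on step is converting the response before using it numerically. The following is a minimal sketch, not taken from the original help text: the Amount field, the MINAMT variable and the High_Amounts output name are hypothetical, and it assumes the VALUE() function accepts a string and a decimal count, and that Extract supports the RECORD, IF and TO parameters described elsewhere in this guide.

ACCEPT "Enter the minimum amount to test" TO MINAMT
COMMENT MINAMT holds character data, so convert it to a 2-decimal numeric value
EXTRACT RECORD IF Amount >= VALUE(MINAMT, 2) TO "High_Amounts"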
Activate

The Activate command makes workspace field definitions temporarily available to a currently open table layout. Once a workspace is activated, its field definitions are available for the current session only. For more information on creating and using workspaces, see Save and Workspaces.

You can activate a workspace through the Overview or from the command line. For more information on activating workspaces from the Overview, see Overview.

Parameters

The Activate command has the following command parameters:

OK
Indicates that a field in the table with an identical name to one in the activated workspace is to be overwritten without confirmation. A field cannot be replaced if it is referenced by a computed field. Default: Confirmation before replacing field.

Secondary
Indicates that the workspace is to be activated for the currently open secondary table.

Workspace
Indicates that a workspace is to be activated.

Command Mode Syntax

ACTIVATE workspace-name

workspace-name specifies the name of the workspace you want to activate for your current Project.

Examples

The following example activates a workspace named Inventory_Workspace containing computed fields that assign city names based on the location code in the currently open Inventory table and calculate the market value for items on hand in an inventory file. To activate the workspace using the Activate command:

1. Open the desired table.
2. Enter the following on the command line in the Command Log or in a procedure:

ACTIVATE WORKSPACE INVENTORY_WORKSPACE

Age

Menu: Found in the ANALYZE menu

Use the Age command to produce aged summaries of data. Examples include evaluating sales trends, looking at transaction volumes and classifying invoices by the number of days outstanding from a particular date.

Unless you specify a cutoff date, aging is based on the current system date. You can specify starting points such as 0, 90, 120 and so on for each aging period, or you can accept the default settings of 0, 31, 61, 91, 121, 151, 181 and 10,000 days. The number of records in each period is displayed in the Command Log in an Analyzer-generated column entitled COUNT. You can also accumulate numeric field totals for each period.

Parameters

In addition to the command parameters described below, the Age command has the following command parameters: Append, First, Graph, GroupBy, If, Next, Noformat, Screen, To and While. For a description of these parameters, see Command Parameters. For a description of supported field modifiers see Field Modifiers.

On
Allows you to select the date field or expression to be aged. In the command dialog, select the aging field from the "Aging field" drop-down list or click [Choose] to display the Selected Fields dialog. For more information on the Selected Fields dialog, see Selection/Edit Dialog Boxes.

Accumulate
Allows you to select the numeric fields to be totaled. In the command dialog, select fields from the "Fields to accumulate" list box or click [Choose] to display the Selected Fields dialog.

Suppress
Excludes agings falling outside the user-specified aging cutoff points. By default, the Age command analyzes two additional cutoff ranges: values below the minimum cutoff specified and values above the maximum specified. In the command dialog, the "Suppress items outside intervals" checkbox overrides this behavior.
Intervals
Allows you to specify the beginning of each aging period interval. The last entry represents the end of the final period. In the command dialog, enter the values one per line in the "Aging Periods" text box. Use 10,000 days to isolate items with clearly invalid dates. The default settings are 0, 31, 61, 91, 121, 151, 181 and 10,000 days. These defaults are set in the Options dialog. For information on changing these defaults, see Aging Periods.

Cutoff
Allows you to specify the date to which dates in the Aging field are compared. In the command dialog, use the built-in calendar in the Age dialog to select a date. In command mode, you must specify the cutoff date in YYYYMMDD character format, regardless of the format of the date field. The default cutoff date is the current system date.

Statistics
The Generate Statistics check box and the Statistics parameter allow you to output columns to the screen or a table containing the minimum, maximum and average (mean) values of the accumulated numeric fields for each key date range. No statistics are generated if no numeric fields are accumulated.

Command Output

Enhanced Results

By default, the Age command outputs results to an Enhanced Results tab in the View.

Tip: To choose Enhanced Results, click the [More] button in the command dialog and choose the Enhanced Result choice from the pull-down list in the "Output Options" section.

The Enhanced Results tab offers 3 sections:

- a tabular section listing a column for the date key field for the aging, the number of records counted for each key field value, and the percent of the total records that each classification represents. If one or more fields were accumulated, additional columns for percentage totals and the accumulated field totals are displayed for each field per classification. Each percent column points to the column it represents. The right-hand percent column points to the first accumulated field column and shows the percent of the total value that the value of this classification interval represents. Percentages are not provided for additional accumulated field columns.
- a visualization of the date key field counts
- a visualization of the accumulated field totals for each individual value from the chosen date key field

The arrangement of the 3 sections in the Enhanced Results tab can be changed by clicking the section icon on the top left of the Enhanced Results tab. For each visualization, the type of visualization can be changed between Bar, Column or Pie types by clicking the visualization pull-down menu on the top right of each visualization.

Command Log Output

When Enhanced Results are generated, additional tabular results are included in the Command Log that also contain columns for the percentage of count and percentage of the totals (as fully described for Table Output).

Table Output

You can also write the command output to a table.

Tip: Click the [More] button in the Age Command dialog and select the Table option from the pull-down list in the "Output Options" group box.

Table output includes columns for Lower and Upper bounds (rather than simply an aging interval as seen in tabular output to the Command Log), the number of records in each interval and the percent of the total records that each interval represents.
If one or more fields were accumulated, additional columns are displayed:

- the accumulated numeric field totals for each interval, for each numeric field accumulated
- a percent column of the total accumulated amount for each interval (this is only generated for the first numeric field selected for accumulation)

Graph/Visualization Output

You can also send the command output to a graph/visualization.

Tip: To choose Visualization Output, click the [More] button in the command dialog and choose the Visualization choice from the pull-down list in the "Output Options" section.

For more information on working with visualizations, see Graphs/Visualizations.

Drill Down

Results from the Age command can be clicked on to drill down into the supporting data on a record basis. For more information on drill down capabilities in tables or tabular results in the Command Log, see Drilling Down into Tabular Data. For more information on drill down capabilities in visualizations, see Graphs/Visualizations.

Command Mode Syntax

AGE date-field-name
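A minimal command mode sketch, assuming a hypothetical table with a date field Invoice_Date and a numeric field Amount; the CUTOFF, INTERVAL and ACCUMULATE keywords mirror the parameters described above and should be verified against Command Parameters, and the cutoff date uses the required YYYYMMDD character format:

AGE ON Invoice_Date CUTOFF 20240630 INTERVAL 0,31,61,91,121,151,181,10000 ACCUMULATE Amount TO SCREEN

This would count invoices in each aging period as at June 30, 2024 and total the Amount field for each period.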
Append

Menu: Found in the DATA menu

Use the Append command when you want to combine records and fields (data and computed fields) from multiple tables with either an identical or similar structure into a single table.

Note: If you have made changes to your currently open table layout, be sure to save the table layout (or close it and save changes) before using it in an Append command.

For example, the Append command is a good choice for combining fields from monthly tables into a yearly table.

Note: A single APPEND command can replace multiple executions of the EXTRACT command with the APPEND option. For more information see Extract.

There is no option to filter the data records in the tables being appended. Even if the records in all source tables are sorted, the output table is considered unsorted because the source records are appended in order of the table selection, without regard to the individual sort orders in each of the source tables.

If you include all fields from all source tables when appending, the record length in the output table may become longer than the longest record in the source tables due to auto-harmonizing of the field types and lengths.

Selecting Tables To Append

To select tables to be appended, you have several choices:

1. Double-click each table to move the table from the Available Table List to the Selected Table List.
2. Use Shift-click to choose a set of contiguous tables from the Available Table List and then click the desired arrow to move the chosen tables to the Selected Table List.
3. Use Ctrl-click to choose a set of non-contiguous tables from the Available Table List and then click the desired arrow to move the chosen tables to the Selected Table List.

Note: Reverse any of these processes to remove tables from the Selected Table List.

Field Output

The Append command only outputs data fields. Any computed fields have their computed values determined for each record, and the computed value for each record is output as a data field.

Automatic Field Harmonization

In certain situations, the Append command will automatically harmonize fields in order to ensure that the fields are appended in a consistent tabular format:

Character Fields
- Different character field lengths are harmonized to the longest appropriate length.
- All character fields are uniformly converted to either ASCII or UTF-16 (Unicode) field types as appropriate (i.e. if any Unicode fields are encountered then the output will always be the UTF-16 character field type).

Numeric Fields
- All numeric fields are converted to the Arbutus field type of length 12, regardless of source numeric field type.
- Different numbers of decimal places encountered are harmonized to the largest appropriate number of decimal places.

Datetime Fields
- Different date, datetime or time formats in the source fields are harmonized by converting the fields to the default Arbutus formats: YYYYMMDD for dates, YYYYMMDD hh:mm:ss for datetimes and hh:mm:ss for times.

Limits to Automatic Harmonizing

Analyzer does not automatically harmonize fields in the following situations:

- if fields with identical names have different data categories (character vs numeric vs datetime vs logical)
- if datetime fields with identical names have different datetime types (date vs datetime vs time)

In these situations, an error message appears and the append operation is not executed.

Note: See the Parameters section below (specifically the optional ASCHAR and ALLCHAR command parameters) to learn about options for harmonizing disparate field types in fields between tables.

Parameters

The Append command has the following optional command parameters:

Common Fields Only [COMMONFIELDS] - indicates that only common data fields (having identical field names) are to be appended. The order of the fields in each table does not have to be the same, but the core field types have to be the same (Character, Numeric, DateTime or Logical).

Harmonize Dissimilar Fields as Character [ASCHAR] - indicates that fields with identical names that contain dissimilar content are to be harmonized as character fields.

Convert All Fields to Character [ALLCHAR] - indicates that all fields are to be converted to character fields.

Command Mode Syntax

APPEND table_1 table_2 TO "combined_table" OPEN

Two or more Analyzer table layout names are specified to indicate the data files to be appended. The combined_table name is the name of the appended table to be created.

COMMONFIELDS - indicates that only common data fields (having the same field names and the same core field types [Character, Numeric, DateTime or Logical]) are to be appended. The order of the fields in each table does not have to be the same. The physical order of common fields in the appended table is based on the physical order of common fields in the first table selected. The alternate column titles for the common fields in the appended table are based on the alternate column titles of the common fields in the first table selected.

ASCHAR - indicates that fields with identical names that contain dissimilar content are to be harmonized as character fields.

ALLCHAR - indicates that all fields are to be converted to character fields.

Examples

To append two monthly receivable tables (ar_november, ar_december):

APPEND ar_november ar_december COMMONFIELDS ASCHAR ALLCHAR TO "ar_nov_dec" OPEN
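A simpler variant, shown as a sketch with hypothetical table names, appends three monthly tables keeping only the fields they share; because COMMONFIELDS requires matching core field types, no character conversion options are needed here:

APPEND ar_january ar_february ar_march COMMONFIELDS TO "ar_q1" OPEN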
Assign

You can use the Assign command to store a value as a variable. You can also attach a condition to the Assign command. Only one variable assignment is permitted per Assign command; to perform multiple variable assignments in a single command, see Let.

Assign is typically used in a procedure in which you want to create or change the value of a variable. If Assign is issued without a condition, the assignment automatically takes place. Assign can be used as a separate command or within a Group command.

An Assign command used within a Group command performs the variable assignment each time the command is encountered within the group, provided the If test (if any) evaluates true for that record.

If you assign a value to an existing variable name that is used within a computed field, or the assignment is made inside a Group command, then the value assigned is adjusted to the specifications previously defined for that variable (if possible); the length is padded or truncated and the decimals are adjusted if required. A standard "adjusting" message used for variable assignments is then displayed in the Command Log. If you assign a value to a variable name that already exists and the variable is not used in a computed field, or is assigned outside of a Group command, then the previous value and its specifications are overwritten. For more information on user-created variables see User-Created Variables.

Analyzer recognizes several special variables such as HEADER and FOOTER. If character values are assigned to either of these two variables, they are automatically used as page headers or footers if a header or footer is not otherwise specified. This automatic page formatting feature remains in effect until the variables are deleted or until the end of the current session. Be aware of these variables when using the Assign command.

Note: You can also create new and edit existing variables using the Expression Builder.

Analyzer also supports using variables to store an array of values, in numeric, character, datetime or logical format. A variable array must use a consistent data type for all array elements; mixing of numeric, character, datetime or logical values is not permitted within the same variable array. Elements in the variable array must have the same format/characteristics (each numeric value must have the same number of decimals and each character string the same maximum string length).

Note: The Assign command allows a maximum array size of 1,000,000 elements.

The array size and format are set by the first assignment to the variable. For example:

x[100]=0.00
y[100]=blanks(100)

would create a variable array called x as an array of 100 numbers with 2 decimals, and a variable array called y as an array of 100 strings with a maximum length of 100 characters.

Tip: Processing variable arrays is always most efficient when the maximum array size is established up front. The easiest way to achieve this is to initially assign a default format and data type to the largest element number in the variable array (as shown in the previous example). If subsequent variable assignments to array elements are made that don't match the previous definition, then the result is forced to match (if possible) and the standard "adjusting" message used for variable assignments within the Group command is displayed in the Command Log.

For more information on variable arrays see Variable Arrays.

Parameters

The Assign command has the following parameter: If. For a description, see Command Parameters.

Command Mode Syntax

ASSIGN variable-name = expression
or
ASSIGN variable-name:literal-string
or
variable-name = expression

variable-name = expression is interpreted automatically as an assignment.

variable-name specifies the name of the variable to be created. All field naming conventions apply. See Naming Fields for details.

expression specifies the value to be assigned to the variable. All Analyzer expression conventions apply.
[n] is an optional parameter for referencing elements in a variable array.

The ASSIGN keyword is optional in the first example of the command syntax, because the "=" sign is the assignment operator in Analyzer. The ASSIGN keyword is mandatory in the second example of the command syntax using the colon ":". Use of the colon ":" assigns the literal string in its entirety to the variable, regardless of quotes. This is useful in an expression where there may be a mix of single and double quotes (such as is common in a SQL Select statement).

Note: When using the syntax with the colon, the ASSIGN command cannot be conditionalized, as the remainder of the line after the literal string is not parsed and therefore the IF parameter would not be processed.

Note: When using the syntax with the colon, the assignment cannot include any trailing blanks, as these will be removed before the ASSIGN command is parsed.

Examples

In the first example, a user-created variable called MPRODCLS is assigned the product class of the current record by entering the command:

ASSIGN MPRODCLS = PRODCLS

Because MPRODCLS is a variable, its value will not change unless explicitly changed by another Assign command.

In the second example, a variable called SAMPLE_QTY will be assigned a value of 1 in a procedure whenever the variable COUNT has a value that is less than 10:

ASSIGN SAMPLE_QTY = 1 IF COUNT < 10

If COUNT is greater than or equal to 10, no action is taken by Analyzer. This means that the value of SAMPLE_QTY remains unchanged. If the variable did not previously exist, it will not be created.

In the third example, a variable called IF_VALUE is assigned a complex condition for use in a subsequent SQL Select statement using Analyzer's SQL command:

ASSIGN IF_VALUE:"COUNTRY='CANADA'"

In the preceding example, the entire literal string (including all single and double quotes) is stored in the IF_VALUE variable.

In the final example, a variable array called AMOUNT is assigned one hundred elements with a numeric value of 0.00, and then the first element of the variable array is assigned a value of 123.45:

ASSIGN AMOUNT[100]=0.00
ASSIGN AMOUNT[1]=123.45

Tip: The keyword ASSIGN can be omitted and the examples above will generate the same result.

Benford

Menu: Found in the ANALYZE menu

The Benford command allows you to generate digital analysis using the Benford formula. This command counts the number of times each leading digit or digit combination occurs in a data set and compares the actual count to the expected count. The expected count is calculated using the Benford formula. The command output can be sent to a graph.

To help you evaluate the significance of deviations between actual and expected counts, the command output includes the Z-statistic for each count. You can also use the Bounds option to help you identify digit frequencies that are significantly outside expectations. When more than one count column falls outside the bounds, the data represented by these columns may be anomalous.

For more information about digital analysis, see Digital Analysis Using Benford's Law: Tests & Statistics for Auditors by Mark J. Nigrini, Ph.D., published by Global Audit Publications.

Note: Digital analysis tools like the Benford command enable auditors and other data analysts to focus on possible anomalies in large data sets. They do not prove that error or fraud exist, but identify items that deserve further study on statistical grounds.
Digital analysis complements existing analytical tools and techniques and should not be used in isolation from them.

Parameters

In addition to the command parameters described below, the Benford command has the following command parameters: Append, First, Graph, If, Next, Noformat, Open, Screen, To and While. For a description of these parameters, see Command Parameters. For a description of supported field modifiers see Field Modifiers.

On
Allows you to specify a numeric field as the Benford key field. In the command dialog, select the field from the "Benford key field" pull-down list or click [Choose] to display the Selected Fields dialog. You can then select a field from the Available Fields list box or use an expression to create the values to be analyzed. For more information on the Selected Fields dialog, see Selection/Edit Dialog Boxes.

Bounds
Allows you to include upper and lower bounds in your output. In the command dialog, check the "Include upper and lower bounds" check box.

Leading
Allows you to specify the number of leading digits to be analyzed. Enter a value from 1 to 6. In the command dialog, enter the value in the "Number of leading digits" text box.

For example, if a numeric field called Amount contains the value $-6,234.56, the value analyzed is:
- 623 if Leading Digits = 3
- 62345 if Leading Digits = 5

If a numeric field called Amount contains the value $3.55, the value analyzed is:
- 355 if Leading Digits = 3
- 35500 if Leading Digits = 5

Input

This command analyzes numeric fields. You can specify up to six leading digits to be analyzed. When there are more than three leading digits, Benford analysis must be sent to a file. Analysis of five or more leading digits takes time, and Analyzer notifies you in the Command Log that it is working. Use the Esc key to terminate the command at any time before results appear in the Command Log.

Effective Benford analysis requires large sample sizes. Analyzer warns you when a sample size may be too small for the specified number of digits.

Anomalous data stands out better when you analyze positive and negative values separately. You can use a filter to separate the two before beginning your analysis.

Digits or digit combinations with zero values are automatically excluded. The Benford command ignores records with values of zero, but reports the number of zero-value records bypassed. It also ignores leading zeros, numeric formatting such as decimals and dollar signs, and other non-numeric digits. If the resulting number of digits is less than specified, Analyzer adds zeros to the right of the result.

Command Output

Command Log Output

By default, the command outputs tabular results to the screen (Command Log).

Tip: Click the [More] button and select the Screen option from the pull-down list in the "Output Options" group box.

You can also send output to a file or to a line graph or 2-D bar graph.
Table Output

You can also write the command output to a table.

Tip: Click the [More] button and select the Table option from the pull-down list in the "Output Options" group box.

For both tabular results in the Command Log and for table output, the result contains four columns: Leading Digits, Actual Count, Expected Count and Z-Stat Ratio. If you select the Bounds option, the table contains two additional columns, Lower Bound and Upper Bound. No output is produced for records that fail to meet test criteria or for records that contain only zeros.

The Leading Digits column shows the leading digits tested. For example, if you specify one leading digit, Leading Digits displays the numbers 1 to 9. If you specify two leading digits, Leading Digits displays the numbers 10 to 99.

The Actual Count column shows the count of each leading digit or leading digit combination found in the data.

The Expected Count column shows the count calculated according to the Benford formula.

The Z-Stat Ratio column shows the Z-statistic for each digit combination. See ZSTAT().

Graph/Visualization Output

You can also send the command output to a visualization.

Tip: Click the [More] button and select the Visualization option from the pull-down list in the "Output Options" group box.

The visualization shows the actual count, the expected Benford count, and the upper and lower bounds if the Include Upper and Lower Bounds check box on the [Main] dialog is selected. Analysis of a single leading digit produces 9 data points on the x-axis, two leading digits produces 90 points, three leading digits produces 900 points and so on.

An alternate way to create a visualization is to simply double-click the command or table output in the Command Log. For more information on working with visualizations, see Graphs/Visualizations.

Drill Down

Results from the Benford command can be clicked on to drill down into the supporting data on a record basis. For more information on drill down capabilities in tables or tabular results in the Command Log, see Drilling Down into Tabular Data. For more information on drill down capabilities in graphs, see Graphs/Visualizations.

Command Mode Syntax

BENFORD ON numeric-field

numeric-field specifies the numeric field to analyze.

LEADING specifies the number of leading digits to be analyzed. Assign n a positive value of 1 to 6. The default is 1.

BOUNDS includes the upper and lower bounds in the output.

Examples

To perform a Benford analysis on the Amount field to two digits and include bounds:

BENFORD ON Amount LEADING 2 BOUNDS TO SCREEN

The results appear in the Command Log or can be graphed.

Calculate

The Calculate command computes and displays the value of an expression. This command can be issued only in command mode. For more information, see Using Commands.

The result of Calculate may be a character, logical, numeric or date value. By separating the expressions to be evaluated with commas, several calculations can be performed simultaneously. Calculate provides the functionality of a calculator combined with access to Analyzer functions, variables and the data in the current record.

In a numeric calculation, the result has as many decimal places as the expression component with the greatest number of decimal places. For example, the result of 365.0000/12.0 has four decimal places. See Fixed-point Arithmetic for details.

Whenever you use Calculate without providing for the result by using the As modifier, Analyzer names the result with the expression name.
See Field Modifiers for more information on the use of "AS".

Prerequisites

If the expression contains a table field value, the appropriate table must first be opened. Use the Find, Seek or Locate commands to move to the record to be analyzed by Calculate. For more information, see Find, Seek and Search.

Command Mode Syntax

CALCULATE expression

expression allows you to specify any valid Analyzer expression, subject to the prerequisites above. See Expression Builder for details on the order of precedence of evaluation of Analyzer expressions and a list of valid operators.

expression name allows you to specify a name for the result of the Calculate command. The name must be surrounded by quotes or be a valid character expression. Default: the expression string.

Examples

To quickly calculate 4.7 multiplied by 18.5 using Analyzer, enter

CALC 4.7*18.5

in the Command Log text box. Click [Accept Entry] or press ENTER to calculate the result. Choose Last Result from the Command Log drop-down list to display the result.

Note: Analyzer rounds the result 86.95 to 87.0. See Fixed-point Arithmetic for more details.

To calculate the unit gross margin for the current record using previously defined fields for the sale price and unit cost, name the result appropriately and enter

CALCULATE SALEPR-UNCST AS 'MARGIN'

in the Command Log text box. Choose Last Result from the Command Log drop-down list to display the result.
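Because comma-separated expressions are evaluated simultaneously, the margin and a margin percent could be produced in one command. This sketch reuses the SALEPR and UNCST fields from the example above; the percent expression and its decimal placement are illustrative only and follow the fixed-point rules described earlier:

CALCULATE SALEPR-UNCST AS 'MARGIN', 100.00*(SALEPR-UNCST)/SALEPR AS 'MARGIN_PCT'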
Call

The Call command allows you to harness the power and ubiquity of the R and Python languages to enhance your analytic power. The Call command can be used to directly run an existing R or Python analytic, implemented in another environment, from within Analyzer. The Call command is only available on the command line or in procedures.

Note: The Call command is ideally suited for a person who writes their own Python or R analytic but also uses Analyzer as a key tool in their overall analytic process. Familiarity with the Python or R language and the script being run is very helpful to ensure a good result. Users unfamiliar with the Python or R script they wish to run may be more challenged to obtain a good result, as the user must know the required folder pathing, the script name, and all of the required parameters and the order in which they need to be passed to the Python or R script.

Rather than interfacing with R and Python on a line-by-line basis, the Call command automatically supplies the currently open primary table as a tab-delimited data file to the R or Python routine, so that the called routine can act on the data file in a single operation. In addition to supplying a tab-delimited data file to the called R or Python routine, the Call command also allows run-time parameters to be easily passed, such as key values, date ranges and limits. This enables a full and complete integration with virtually any external R or Python routine.

For example, if a powerful analytic is already implemented in R or Python, rather than re-inventing the wheel, simply open the appropriate table in Analyzer and use the Call command to run the R or Python script by name. If the result of the R or Python script is a tab-delimited file, then the file will automatically be imported back into Analyzer and opened so you can integrate the resulting data set with your remaining Analyzer analysis.

You can also issue the Set Environment command to set a Python virtual environment path (rather than using the default Python path) prior to issuing a Call command to run a Python script. You could also use the Set Environment command if your system Python distribution has not been added to the PATH environment variable, or if you want to prioritize one particular system Python distribution over another. The Set Environment command's specified path will always be highest in priority. For more information see Set Environment.

Note: The Set Environment command is currently not available for use with R.

Parameters

The Call command has the following optional command parameters:

To - use this parameter to specify the name of the tab-delimited result file created by the existing Python or R script so it can be automatically imported back into Analyzer. If the existing Python or R script does not create a tab-delimited result file, then omit the To parameter; the Python or R script can still be run, but no result file will be automatically imported into Analyzer. If the Python or R script creates another type of result file (for example, an Excel spreadsheet), then the user can subsequently import it into Analyzer manually via the Data Definition Wizard or by coding an appropriate Import command into their Analyzer procedure after the Call command.

Variables - used to list the values of one or more required input parameters for the Python or R script. The user must know which values are required and what order to list them in for the called Python or R script.

If - used to filter the records to be processed for the currently open primary table (if any).

Command Mode Syntax

Python:

CALL PYTHON PROCEDURE "full_path\python_script_name.py"

R:

CALL R PROCEDURE "full_path\r_script_name.r"
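A fuller sketch with hypothetical paths, parameter values and result file name; it assumes the To and Variables parameters accept the inline forms shown, which should be verified against the parameter descriptions above:

OPEN Invoices
CALL PYTHON PROCEDURE "C:\scripts\outlier_scan.py" VARIABLES "2024-01-01" "2024-12-31" TO "outlier_results"
COMMENT the tab-delimited result file is imported and opened automatically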
Index

Menu: Found in the DATA menu

Use the Index command to create an index file that allows direct access to the records of a table in a logical rather than physical order. You can have more than one index file relating to a particular table. At any one time only one index file can be active.

Note: The Index command cannot be used within a Group command in a procedure.

Note: The key field to be indexed cannot exceed 255 characters.

For additional information, see Re-ordering Data: Sort and Index.

When working with keyed data sources on an Arbutus Server, Analyzer indexing will not be available. Analyzer will automatically use the native index on the host system directly. For more information on native keys see Native and Key Fields.

This section covers: Indexing versus Sorting, Sort Order, Activating and De-Activating Indexes, Indexing Tips, Filters and Conditions, Parameters, Creating an Index and Command Mode Syntax.

Indexing versus Sorting

The Index command, as an alternative to the Sort command, is usually faster to run and the resulting index file is generally much smaller. However, processing a large indexed file sequentially is significantly slower than processing a large sorted file. The following table compares the benefits and drawbacks of the Sort and Index commands.

Performance Issue                                    Sorting       Indexing
Execution speed                                      Slower        Faster
Resulting file size                                  Larger        Smaller
Required disk space                                  More          Less
Subsequent processing of an entire file              Much faster   Much slower
Subsequent processing searching for a few records    Much slower   Much faster

Use the Index command as an alternative to the Sort command only when disk space is very limited or you want to quickly find records with a specific key value.

Note: If you intend to access more than 10 percent of the records in the file, it is better to sort the data in order to optimize subsequent processing speed.

The Index command creates an index file that contains pointers to the records in the open table. These pointers are arranged so that the table can be accessed in a logical order. For example, by turning on the Find Literal option in the Search command, you can locate records with specific key values very quickly, usually in less than one second.

Sort Order

The Sort Order option specifies the sort order (sequence) to be used for character fields when using the Sort and Index commands. By default, Analyzer sorts data in increasing byte order. Analyzer offers three sort order options:

Byte Order
For all languages, the data will be sorted in increasing byte order.
Tip: You can use the Windows Character Map accessory to view the full Unicode character set in increasing byte order.

Phonetic Order
For non-Asian languages, the data in the local language will be sorted by grouping together related upper and lower case characters. For Asian languages, the data will be sorted in the standard phonetic order.

Stroke Count (Chinese) Order
For traditional and simplified Chinese languages only, the data will be sorted based on each character's stroke count rather than the character's phonetic order.

Examples

Given the data X1, A1, x2, a2, X3, A3, x4, a4:

Byte order would arrange these as: A1, A3, X1, X3, a2, a4, x2, x4
Phonetic order would arrange these as: A1, A3, a2, a4, X1, X3, x2, x4

Case-Insensitive Sorting

If case-insensitive sorting is required (i.e. sorting where case is not an issue), convert your character string to uppercase using the UPPER() function. This will ensure values like A1 and a1 are considered equivalent and are grouped together. This may be useful for commands like Classify to minimize the number of classified groups and to group equivalent items together in a single grouping.

Command Syntax

SET ORDER n

where n is 0 for Byte order, 1 for Phonetic order and 2 for Stroke Count order.
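For example, to switch to phonetic ordering and then build a case-insensitive index, a sketch (the vendor_name field and index name are hypothetical; it assumes key expressions such as UPPER(vendor_name) are allowed for Index On, as the Parameters description below indicates):

SET ORDER 1
INDEX ON UPPER(vendor_name) TO "Vendor_Name_Index"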
Activating and Deactivating Indexes

There are four ways to activate an index file:

- When you first create the index, make sure Open Table is checked in the Index dialog. This activates the index when the Index command is executed.
- If you want to activate an existing index, click the index in the Overview.
- If you want to activate an existing index, right-click on the index in the Overview and click OPEN.
- If you want to activate an existing index on the command line or in a procedure, issue a SET INDEX TO index_name command.

To check that an index is active, look at the left of the status bar. The window will indicate either "x Records Indexed" or "x Records Filtered Index", depending on whether or not the index was created using a filter.

There are two ways to de-activate an index file:

- If you want to de-activate an existing index, right-click on the index in the Overview and click CLOSE.
- If you want to de-activate an existing index on the command line or in a procedure, issue a SET INDEX command.

For more information on the SET INDEX command, see Set Index.

Indexing Tips

Here are some tips for working with indexes:

- Give the index file a name that references some of the important details about the index, such as the table name for which it was created, the key fields indexed upon and so on.
- The quickest way to identify which index is currently in use is to look for a colored key icon displayed under the open table in the Overview.

Note: In an indexed file, numbers are listed before alphabetic characters.

Note: When an index is created and named (for example "INDEX_ON_VENDOR"), it is displayed in the Overview with the supplied name but is written to the hard drive with the relevant table name (for example "AP") appended to the front of the index name (for example "AP INDEX_ON_VENDOR.inx"). This ensures that users can easily determine which table an index applies to (either by viewing the index name using Windows Explorer or by right-clicking on the index in the Overview and choosing Properties).

Filters and Conditions

If you create an index using a filter, Analyzer displays "Filtered Index" in the status bar to let you know you are only referencing the part of the file that matches the filter's condition.

A conditional index eliminates records that do not meet a specified condition. This allows you to create separate index files for subsets of a larger file for later manipulation. Do not include a condition if you want the entire file to be indexed.

In addition to an If test, you can specify While, Next and First parameters. Only records from the open table which meet the specified condition will have pointers in the resulting index file. If processing is terminated early by one of these parameters, the subsequent records are not included in the index file and cannot be accessed when using it.

Since it may not always be obvious that you are not working with the entire file, to verify that you are working with a filtered index, look for the words "Filtered Index" on the status bar. Only records that meet the index conditions are processed.

Parameters

In addition to the parameter described below, the Index command has the following command parameters: Ascending, Descending, First, If, Next, To and While. For a description of these parameters, see Command Parameters. For a description of supported field modifiers see Field Modifiers.

On
Specifies the key fields or expressions to index. In the command dialog, select the fields from the "Index Key Fields" list box or click [Choose] to display the Selected Fields dialog, where you can specify ascending or descending order for each key. For more information, see Selection/Edit Dialog Boxes.

Creating an Index

To create a new index:

1. Click [Index] on the button bar, or select Data from the menu and choose Index to display the Index dialog.
Note: When working with zSeries data (ISAM, VSAM, DB2 and IMS) with native keys, Analyzer indexing will not be available. Analyzer will automatically use the native index on the host system directly. For more information on native keys see Native and Key Fields.
2. From the Index dialog, select a field from the "Index Key Fields" list box.
3. You can also click [Choose] to display the Selected Fields dialog, which gives you more information about the fields you are selecting.
4. There are three ways to select fields using the Selected Fields dialog:
- Click the name of the field to select it, then click the right arrow to move it between list boxes.
- Double-click the name of the field to select it and move it from the Available Fields list box to the Selected Fields list box.
- Click [Expression] to display the Expression Builder. The Expression Builder lets you apply a condition or filter to the data in the selected field.
5. Select the field you want from the Available Fields list box, then click [OK] to return to the Index dialog.
6. Enter a name for the index file in the Output File text box.
7. Click [OK] to close the dialog and execute the Index command.

Analyzer lists the index in the Overview under the associated table layout and under Indexes. You can now use the Search command to access individual indexed records quickly. For more information see Search. To learn more about activating and de-activating indexes, see Activating and Deactivating Indexes.

Command Mode Syntax

INDEX ON key-field... TO file-name
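A short sketch of a conditional index in command mode, with hypothetical field and file names; it combines the On key fields with the If parameter described above to build a filtered index:

OPEN AP
INDEX ON vendor_no invoice_date IF amount > 0 TO "AP_Positive_By_Vendor"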
Join

Menu: Found in the DATA menu

Use the Join command to combine fields from two tables into a third table based upon common key fields (numeric, character or date) found in both tables. You typically use Join to match records in a transaction table with those in a master table. You can, for example, match invoice data in an accounts receivable table to data in a master customer table, or you can compare the contents of two tables.

When you join a transaction table and a master table, the transaction table is usually the primary table and the master table is the secondary table. Carefully identify your primary and secondary tables, because results differ if you reverse the tables, depending on the type of join.

To join two tables, choose the join key(s) from the primary and secondary tables. By default, all fields from both tables are included in the joined output table. To select specific fields to output (including key fields), click [More] and select the primary and secondary fields from the "Fields to output" list boxes.

This section covers: Join Versus Relations, Join Types, Join Key Fields, Join Considerations, Output Field Names, Source Table Locations, Preparing Key Fields, Auto-Harmonizing Key Fields, Parameters and Command Mode Syntax.

Join Versus Relations

Join and Relations both allow you to work with data from more than one table. Join provides several include/exclude options that are not available through Relations. However, certain kinds of analyses are more easily performed with Relations. Depending on the type of analysis required, one command may be more efficient than the other.

Join allows you to combine data from two tables, with output going to a third table. The output can consist of matched or unmatched data, depending on the type of Join you specify. Relations allows you to create virtual joins between tables: data from separate but related tables can be analyzed as though it existed in a single table. In both cases, you need a key field common to the two tables to be joined or related.

Join uses more disk space than Relations for three reasons:

- For a Join, you must sort the secondary table and preferably sort or index the primary table also. However, to relate two tables, you need to index only one table. Sorting creates a new table as large as the original one; indexing creates a small index file that points to the original table.
- Join creates a new table that can be larger than both the primary and secondary tables together, depending on the type of Join performed. Relations does not write a new table. If you need a table that contains the related data, you can create one by extracting the related fields using the Extract command.
- To join four tables, you need to perform three sorts and three joins. To relate four tables, you need the original table and three indexes on the other tables.

For more information, see Relations.

Join Types

The Join command has six join types, based on key field comparisons. The output option you choose determines which records are included in the output table. The types of Joins are detailed in the table below.

Option                               Matched   Matched     Unmatched   Unmatched
                                     Primary   Secondary   Primary     Secondary
Matched Primary                      X         X
All Primary                          X         X           X
Matched Primary and All Secondary    X         X                       X
All Primary and Secondary            X         X           X           X
Unmatched Primary                                          X
Matched Many to Many (Keyed)         X         X
Many to Many (Unkeyed)               X         X           X           X

Join types can be chosen in the Join command dialog from either the Join type pull-down menu or by clicking on the adjacent Join type Venn diagram.

Matched Primary Records Only

Creates an output table of selected fields from the primary and secondary tables for primary table records with key field matches in the secondary table. If there is more than one key field match in the secondary table, Analyzer uses the first matched record it finds. If no matching record is found, no output record is produced.

Note: This is the default option.

All Primary Records

Creates an output table of selected fields from the primary and secondary tables for all records in the primary table, including those with no matches in the secondary table. The fields for those records with no secondary table match are filled with blanks or nulls, depending on the field type. Non-matching records from the secondary table are ignored.

Matched Primary and All Secondary Records

Creates an output table of selected fields from the primary and secondary tables for all records from the secondary table, including those with no match in the primary table. Fields for those records with no match in the primary table are filled with blanks or nulls, depending on the field type. Non-matching records from the primary table are ignored.

All Primary and Secondary Records

Creates an output table of selected fields for all records from both primary and secondary tables, whether the records have matches or not. The fields for unmatched records from either table and duplicates from the secondary table are filled with blanks or nulls, depending on the field type.

Unmatched Primary Records Only

Creates an output table of all records from the primary table that had no matches in the secondary table. The output table includes primary fields only, because there is no secondary information if records have no match.

Matched Many-to-Many Records

Creates an output table of selected fields from the primary and secondary tables for primary table records with key field matches in the secondary table.

Tip: It is strongly recommended that you apply a filter to a matched many-to-many Join to manage the size of the resulting table.
Note: When filtering either of the many-to-many Joins, Analyzer allows secondary fields to be referenced in the filter.

This option differs from the Matched Primary Records option: if there is more than one key field match in the secondary table, Analyzer uses all matched records it finds. If no matching record is found, no output record is produced.

Users with an Arbutus zSeries Server connection to an IMS database can also perform joins that include an IMS dataset as the secondary table. Select this option to perform a one-to-many type of join with an IMS dataset as the secondary table.

Note: Analyzer supplies an error message to the Command Log if the IMS secondary table is not HDAM.

Example

This example illustrates a keyed matched many-to-many Join on two monthly files containing a customer number key field (NO) and a monthly balance (BALANCE):

January Data          February Data
No    Balance         No    Balance
001   100.00          001   150.00
001   200.00          001   250.00
002   300.00          003   350.00

The result of a matched many-to-many Join would be:

Many-To-Many Joined Data
      January               February
No    Balance         No    Balance
001   100.00          001   150.00
001   100.00          001   250.00
001   200.00          001   150.00
001   200.00          001   250.00

Many-to-Many Records

This Join replicates a true SQL join. It is un-keyed and matches every record in the primary table with every record in the secondary table.

Tip: It is strongly recommended that you apply a filter to a many-to-many Join to manage the size of the resulting table. Consider that an unfiltered many-to-many Join of two tables (one with 7,000 records and one with 5,000 records) will create an exponentially larger table (7,000 x 5,000 = 35,000,000 records in the output table).

This option differs from the Matched Many-To-Many option, which is a keyed Join matching every key in the primary table with every matching key in the secondary table.

Note: When filtering either of the many-to-many Joins, Analyzer allows secondary fields to be referenced in the filter.

Example

This example illustrates an un-keyed many-to-many Join on two monthly files containing a customer number (NO) and a monthly balance (BALANCE):

January Data          February Data
No    Balance         No    Balance
001   100.00          001   150.00
001   200.00          001   250.00
002   300.00          003   350.00

The result of an un-keyed many-to-many Join would be:

Many-To-Many Joined Data
      January               February
No    Balance         No    Balance
001   100.00          001   150.00
001   100.00          001   250.00
001   100.00          003   350.00
001   200.00          001   150.00
001   200.00          001   250.00
001   200.00          003   350.00
002   300.00          001   150.00
002   300.00          001   250.00
002   300.00          003   350.00

Join Key Fields

The key fields are the common elements that allow the data from two tables to be joined. The key fields in both primary and secondary tables (fields or expressions) must be identical in generic data type and structure. Sometimes it is necessary to modify or normalize the contents of a field before it can be used as a key field. You can use Analyzer functions in the Expression Builder to convert fields to the same generic data type, combine two fields into one, shorten fields or modify their structures (like case). See Expression Builder for more details.

When considering fields to use as your key fields, examine the fields for compatibility:
Many-to-Many Records

This Join replicates a true SQL Join. It is un-keyed and matches every record in the primary table with every record in the secondary table.

Tip: It is strongly recommended that you apply a filter to a many-to-many Join to manage the size of the resulting table. Consider that an unfiltered many-to-many Join of two tables (one with 7,000 records and one with 5,000 records) produces the full cross product (7,000 x 5,000 = 35,000,000 records in the output table).

This option differs from the Matched Many-To-Many option, which is a keyed Join matching every key in the primary table with every matching key in the secondary table.

Note: When filtering either of the many-to-many Joins, Analyzer allows secondary fields to be referenced in the filter.

Example

This example illustrates an un-keyed many-to-many Join on two monthly files containing a customer number (NO) and a monthly balance (BALANCE):

January Data            February Data
No      Balance         No      Balance
001     100.00          001     150.00
001     200.00          001     250.00
002     300.00          003     350.00

The result of an unkeyed many-to-many Join would be:

Many-To-Many Joined Data
January                 February
No      Balance         No      Balance
001     100.00          001     150.00
001     100.00          001     250.00
001     100.00          003     350.00
001     200.00          001     150.00
001     200.00          001     250.00
001     200.00          003     350.00
002     300.00          001     150.00
002     300.00          001     250.00
002     300.00          003     350.00

Join Key Fields

The key fields are the common elements that allow the data from two tables to be joined. The key fields in both primary and secondary tables (fields or expressions) must be identical in generic data type and structure. Sometimes it is necessary to modify or normalize the contents of a field before it can be used as a key field. You can use Analyzer functions in the Expression Builder to convert fields to the same generic data type, combine two fields into one, shorten fields or modify their structures (such as case). See Expression Builder for more details.

When considering fields to use as your key fields, examine the fields for compatibility:

Length - are character keys the same length? If not, consider using functions like SUBSTRING() in a computed field to adjust the length if reasonable.

Case - are character keys the same case? The Join command is case sensitive when comparing character keys. If not, consider using functions like UPPER() or LOWER() in a computed field to adjust the case if reasonable.

Justification - is data justified the same way in both character keys? If not, consider using functions like LTRIM() or ALLTRIM() in a computed field to adjust the justification if reasonable.

Data Type - are keys the same type of data (character, numeric, date)? If not, consider using functions like VALUE() or STRING() in a computed field to adjust the data type if reasonable.

Note: Altering your Join key data requires careful consideration - adjusting the length, case, justification or data type could fundamentally change your Join results. Only make alterations to key fields that are reasonable for the keys and that maintain the integrity of the underlying data.

Join Considerations

To identify transactions for which no master table records exist, specify the transaction table as the primary table. Conversely, to identify master records with no matching transactions, specify the master table as the primary table.

When using computationally intensive Analyzer functions to process fields in the secondary table (for example, harmonizing key fields for a Join using the NORMALIZE() function), consider performing a field extract to extract all required secondary fields to a new secondary table prior to performing the Join. This will ensure a much more efficient Join because all computations in the computed fields will be resolved once, as part of the field extraction, rather than repeatedly as part of the Join.
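A minimal sketch of that preparation step. The table names, the NO key field and the NAME field are hypothetical, the single-argument NORMALIZE() call and the EXTRACT FIELDS field-list TO table-name form are assumptions; only the pattern matters - resolve the expensive computation once via Extract, then use the extracted table as the secondary table.

COMMENT Hypothetical sketch - names and function arguments are illustrative only
OPEN "Secondary_Raw"
EXTRACT FIELDS NO NORMALIZE(NAME) AS "Norm_Key" TO "Secondary_Prepared"
OPEN "Secondary_Prepared" SECONDARY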
Output Field Names

If any fields in the primary and secondary tables have the same name, Join automatically assigns a new name to the secondary field in the output table. For example, if both tables contain an Amount field, Join tries to name the secondary field Amount2, Amount3 and so on, until it finds a name that does not conflict with any field names in the output table.

Tip: In procedures and command mode, use the AS field modifier to specify unique names for your Join fields. For more information see As.

Source Table Locations

Tables to be joined must belong to the same Analyzer Project. In addition, since you can only log on to one Arbutus Server at a time, server tables to be joined must reside on the same Arbutus Server and use the same server profile. This means you cannot join a local table to a server table, nor can you join tables on different Arbutus Servers.

Tip: Use the Extract command to move local tables or tables from different Arbutus Servers to the same Arbutus Server for joining.

Note: When using Analyzer to join IMS segment paths from the same IMS database, if a PCB isn't available for each segment path then the message "Insufficient PCB's for multiple path processing." is displayed and the command in progress is cancelled. For more information, please see "The Arbutus zSeries Server Program Specification Block" in the Arbutus zSeries Server Installation Guide.

Preparing Key Fields

The Join command requires that you sort the secondary table in ascending sequence on the key fields. If the secondary table is not already sorted, select the Presort Secondary Table checkbox in the Join dialog.

While not mandatory, it is also recommended that you sort the primary table on the key fields. You can do this by selecting the Presort Primary Table checkbox in the Join dialog. Alternatively, you can index the primary table instead of sorting it. You can use Join with an unsorted primary table, but for large joins this increases processing time dramatically. If the primary table is only partially sorted on the key field, for example, when a table is sorted by account code for individual months, processing time increases, but not to the same extent.

You can select more than one key field. You might want to do this, for example, if you had one supplier with the same vendor number in different locations. To capture all records for this vendor, regardless of location, you could select both supplier and location fields as key fields. You would then also need to select two corresponding key fields in the secondary table.

Auto-Harmonizing Key Fields

Key fields may be numeric, character or date fields. Join supports a variety of key comparisons between two tables (character to numeric, numeric to numeric, character to character and date to date). Normally, keys are of the same type, but when they are different, Join will attempt to harmonize them. Join will only attempt to harmonize keys when the same number of key fields has been selected from each table. When multiple key fields are selected, they are harmonized in pair sets: first primary key with first secondary key, and so on.

In the case of character to numeric key comparisons, Analyzer will prompt the user to auto-harmonize the disparate keys, and if accepted, Analyzer will convert the character key to a numeric key. In the case of character to character key comparisons where the key lengths differ, Analyzer will prompt the user to auto-harmonize the keys, and if accepted, the shorter character key will be padded with the appropriate number of blanks.

Date to date key field comparisons don't require auto-harmonization because all dates are stored in a consistent internal date format. Date to numeric or date to character comparisons are technically allowed but are not auto-harmonized. These comparisons will most likely yield no results and so should be avoided.

In any case, auto-harmonizing may produce unintended Join results. The alternative, if your key fields are not identical, is to manually harmonize them using a variety of useful Analyzer functions which can convert the data type, structure or length of the keys to make them suitable for use as key fields.

Tip: To convert numeric fields to character format, see STRING(). To convert positive numeric fields to a character format with leading zeros, see ZONED(). To alter the length of a character field, see SUBSTRING(). To convert character fields containing only numeric digits into numeric fields, see VALUE(). Also see Expression Builder and Working with Functions for details on other functions you can use to harmonize your key fields.
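For example, suppose the primary key is a 10-character customer number and the secondary key is numeric. Two manual alternatives, using the hypothetical field names CustNo_Char and CustNo_Num (the argument forms STRING(number, length) and VALUE(string, decimals) are assumptions; check the function reference):

COMMENT Hypothetical - convert the numeric key to a 10-character key
STRING(CustNo_Num, 10)

COMMENT Or convert the character key of digits to a numeric key (0 decimals)
VALUE(CustNo_Char, 0)

Either expression can then be selected as the key field in place of the original field, so that both keys share the same generic data type.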
Parameters

In addition to the Join type parameters described in Join Types and the remaining command-specific parameters described below, the Join command supports the following common command parameters: Append, First, If, Open, Next, Noformat, To and While. For a description of these parameters, see Command Parameters.

Secondary Table

In the command dialog, this option allows you to choose, from the drop-down list, the secondary table that you want to join to the primary table.

Primary Keys

The PKEY command parameter specifies which key fields or expressions from the primary table to use. In the command dialog, select the key fields from the "Join Key Fields" primary list box or click [Choose] to display the Selected Fields dialog. For more information, see Selection/Edit Dialog Boxes.

Presort

The PRESORT command parameter specifies that the primary table is to be sorted on the primary key fields before joining the tables. In the command dialog, select the Presort checkbox beside the primary field list box.

Secondary Keys

The SKEY command parameter specifies which key fields or expressions from the secondary table to use. In the command dialog, select the key fields from the "Join Key Fields" secondary list box or click [Choose] to display the Selected Fields dialog.

Secsort

The SECSORT command parameter specifies that the secondary table is to be sorted on the secondary key fields before joining the tables. In the command dialog, select the Presort checkbox beside the secondary field list box.

Primary Fields

The FIELDS command parameter specifies which primary table fields or expressions to include in the output table. In the command dialog, click the [More] button, then select the fields from the Fields to Output - Primary list box or click [Choose] to display the Selected Fields dialog. Key fields are not automatically included in the output, so you must select them if you need them.

Secondary Fields

The WITH command parameter specifies which secondary table fields or expressions to include in the output table. In the command dialog, click the [More] button, then select the fields from the Fields to Output - Secondary list box or click [Choose] to display the Selected Fields dialog. Key fields are not automatically included in the output, so you must select them if you need them.

Command Mode Syntax

Before issuing the Join command, open the primary table and then open the secondary table using the following command:

OPEN "Secondary_Table" SECONDARY

Then issue the Join command:

JOIN {PRIMARY|SECONDARY|PRIMARY SECONDARY|UNMATCHED|MANY|SQL} PKEY primary-key-field-list FIELDS primary-field-list SKEY secondary-key-field-list
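Putting the pieces together, the following is a hedged sketch of a default (matched) Join in command mode. All table and field names are hypothetical; the AS modifier renames the secondary Amount field to avoid a name conflict, as discussed under Output Field Names.

COMMENT Hypothetical sketch - table and field names are illustrative only
OPEN "AR_Transactions"
OPEN "AR_Master" SECONDARY
JOIN PKEY CustNo FIELDS CustNo Amount SKEY CustNo WITH Amount AS "Master_Amount" TO "AR_Matched" PRESORT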
Let

You can use the Let command to store a value as a variable. Although similar to the Assign command, Let allows multiple variable assignments on a single line rather than just one. Let does not support a condition. Let is typically used in a procedure in which you want to create or change the value of one or more variables on a single line. Let can be used as a separate command or in a group. A Let command in a group performs the variable assignment each time the command is encountered, provided any test for that record evaluates true. For more information on assigning variables, see Assign. For additional information, see User-Created Variables.

Command Mode Syntax

LET variable-name1 = expression1 variable-name2 = expression2 ...

A statement of the form variable-name = expression is interpreted automatically as an assignment.

variable-name specifies the name of the variable to be created. All field naming conventions apply. See Naming Fields for more details.

expression specifies the value to be assigned to the variable. All Analyzer expression conventions apply.

Examples

In the first example, the value of the product class for the current record is assigned to a user-created variable named MPRODCLS by entering the command:

LET MPRODCLS = PRODCLS

Because MPRODCLS is a variable, its value will not change unless explicitly changed by another Let or Assign command.

In the next example, the variables MPRODCLS and SAMPLE_QTY are assigned values at the same time:

LET MPRODCLS = PRODCLS SAMPLE_QTY = 1
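Because Let can be used in a group, running totals can be accumulated as records are processed. The following hedged sketch counts and totals large transactions; AMOUNT, HIGH_COUNT and HIGH_TOTAL are hypothetical names, and the GROUP IF form is an assumption (see Group):

COMMENT Hypothetical sketch - GROUP IF form and field names are assumptions
LET HIGH_COUNT = 0 HIGH_TOTAL = 0
GROUP IF AMOUNT > 5000
LET HIGH_COUNT = HIGH_COUNT + 1 HIGH_TOTAL = HIGH_TOTAL + AMOUNT
END

Each record with AMOUNT greater than 5000 updates both variables in a single Let command.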
List

Use the List command to display selected output data in a columnar list format. For example, you can use List to review the detailed contents of a table. This command can be issued only in command mode. For more information, see Groups.

This command allows you to select information from the table and print it to the screen, send it directly to a printer or save it to a text file. Unless otherwise specified, output is sent only to the screen. You can include the optional Line parameter to create multi-line listings. Page numbers, date, time, user identification and column headings are provided automatically. Numeric columns are also automatically totalled.

You can specify multi-line, as well as centered, headers and footers using the Header and Footer parameters. Because column headings are determined by the first line of fields, make sure you specify appropriate headings on the first line. Use the appropriate Width modifier to control horizontal positioning on each line and ensure fields are aligned vertically the way you want them.

Note: Arranging a View is a more powerful way of listing and reviewing table data. See Displaying a View for details.

Parameters

In addition to the command parameters described below, the List command has the following command parameters: To. For a description of these parameters, see Command Parameters. For a description of supported field modifiers, see Field Modifiers.

Header

Header inserts text at the top of each page. This overrides any value set by a header variable.

Footer

Footer inserts text at the bottom of each page. This overrides any value set by a footer variable.

Fields

Indicates that the fields following this keyword are to be included in the list on line one.

Line

Specifies additional output lines. For multi-line reports, column headings are determined by line 1. Headings for other lines will not be output. If necessary, adjust the headings on line 1 to reflect the information on the other lines.

Skip

Causes Analyzer to insert the specified number of blank lines between each record in the list. For example, LIST ALL SKIP 1 produces a double-spaced list (one blank line between each record).

Note: When creating a multi-line report, you may find field modifiers such as Width and As useful for aligning columns or adjusting headings. See Field Modifiers for details.

Unformatted

Suppresses the special list pagination features and the space between fields (undelimited SDF format). It creates output identical to that created by the Export ASCII command. Use it with the "TO text-file-name" parameter to produce output that can be further processed by other software programs.

Command Mode Syntax

LIST {list-fields|ALL} LINE n list-fields¿

list-fields specifies the fields to be listed. Any field or expression may be used. Enter ALL to list every defined field. When used with the LINE parameter, list-fields specifies the fields or expressions to be listed on that line.

LINE n specifies an additional output line; n can be a value between 2 and 60.

Examples

You can use List on an inventory file to list the product number, product description, quantity on hand, unit cost and recorded value fields for records with a negative unit cost or quantity on hand. You can also add a descriptive header to the list. Enter the following into the Command Log text box:

LIST PRODNO PRODDESC QTYOH UNCST VALUE IF QTYOH < 0 OR UNCST < 0 HEADER 'NEGATIVE VALUES'

The result of the List command appears in the Command Log. Analyzer informs you that only five records meet the test. These are unusual items and warrant further investigation.
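The Line parameter can spread each record across several output lines. A hedged sketch using the same hypothetical inventory fields, with one blank line between records (the trailing position of SKIP is an assumption):

LIST PRODNO PRODDESC LINE 2 QTYOH UNCST VALUE SKIP 1

Column headings come from line 1 (PRODNO and PRODDESC here), so adjust them with field modifiers such as As if they should describe the whole record.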
Locate

Use the Locate command to move to a specified record number or to the first record in a file meeting a specified condition.

Note: The Locate command corresponds to the Locate Record and Locate If options of the Search command. To access the Search command, select Tools from the menu and choose Search. For more information, see Search.

Locate is usually much slower in finding a specific record than the Find Literal option of the Search command because Locate reads the file sequentially. However, unlike the Find Literal option, Locate does not require the file to be indexed first. The search always starts at the first record (unless you specify the While or Next parameters in the command) and does not rely on any predefined key fields. The processing speed is proportional to the size of the file and the location of the record in it. Locate Record is often used as a fast way of moving to a specific record number in a fixed record length file. You can also include the Next and While parameters in a Locate command.

Parameters

The Locate command has the following command parameters: If, First, Next, and While. For a description of these parameters, see Command Parameters.

Command Mode Syntax

LOCATE¿
LOCATE RECORD n¿

Note: The number n is the record number to be located.

Examples

To locate the first inventory item beginning with the word PLYWOOD in an inventory file sorted by product description, use Locate. Because the file is sorted on the product description, all such items are grouped together, facilitating a quick analysis of plywood products. Enter the following into the Command Log text box:

LOCATE IF PRODDESC = 'PLYWOOD'

The Command Log indicates the first record number that meets the condition.

Loop

Use the Loop command within a Group command to process a record more than once. Loops are frequently used when a record contains repeated information, sometimes called buckets or segments, that you want to process. Commands between Loop and End are executed repeatedly on the current record until the result of the test is false. Loops can only be used within groups. Before you create a loop, you should be familiar with the concepts involved in creating groups. For more information, see Group. For a more detailed description of using the Loop command to process multiple record type files, see Multiple-Record-Type Files.

Parameters

The Loop command has the following parameter: While. For a description, see Command Parameters.

Command Mode Syntax

Each Loop command must be closed with an End command. Within a Loop, you must specify the While parameter to test for a condition. The commands between Loop and End are executed repeatedly for the current record as long as the While test specified is true. If the While test is initially false, the commands are not executed.

Note: To avoid creating an infinite loop, make sure that the test you specify eventually returns false. You can also use the Loop option of the Set command to prevent infinite looping. See Set Loop for details.

LOOP WHILE test¿
commands¿
...¿
END¿

Examples

The following Loop command totals 100 bucketed fields in a record, assuming each bucket is five bytes long:

GROUP¿
COUNT = 0¿
TOTAL = 0¿
LOOP WHILE COUNT < 100¿
TOTAL = TOTAL + OFFSET(AMOUNT, COUNT * 5)¿
COUNT = COUNT + 1¿
END¿
END¿
Merge

Menu: Found in the DATA menu

Use the Merge command to combine two sorted tables with identical record structures into a third table. For example, you can use Merge to combine two tables with identical record structures from different time periods, different company branches and so on. To combine fields from a Master and Transaction table that have different structures, see Join.

Merge places the merged output records from both the primary and secondary tables into a third output table. All aspects of the records remain unaltered and the sort sequences are maintained. For each key field value, the records of the primary table are placed before those of the secondary. The primary and secondary tables must have identical structures, although the fields for the two tables do not have to be named identically. The Merge command produces a table along with a table layout. This new table can be used for further analysis and reporting with Analyzer.

Note: The If, While, First and Next parameters that limit records selected for output only apply to the primary table. See Extract for other possible alternatives for merging data.

Prerequisites

For command line use, you must open the secondary table, as well as the primary table, before issuing the Merge command. The primary and secondary tables must have identical record structures. If this is not the case, the results are unpredictable.

Both the primary and secondary tables must be sorted in ascending sequence on common key character fields. You can check the Presort checkbox in the Merge dialog to sort the primary table. The primary table can be indexed instead of sorted, if desired. The key fields must be character fields. If they are not, use the STRING() function to convert the key fields to character fields or the DATE() function to convert a date. The total length of the character fields must be the same in both tables.

The tables to be merged must be defined within the same Analyzer Project. When working with table data on an Arbutus Server, tables to be merged must reside on the same server. You cannot merge a table on the local drive to a table on the server.

Note: When using Analyzer to merge IMS segment paths from the same IMS database, if a PCB isn't available for each segment path then the message "Insufficient PCB's for multiple path processing" is displayed and the command in progress is cancelled. For more information, please see "The Arbutus zSeries Server Program Specification Block" in the Arbutus zSeries Server Installation Guide.

Parameters

In addition to the command parameters described below, the Merge command has the following command parameters: Append, First, If, Open, Next, Noformat, To and While. For a description of these parameters, see Command Parameters. For a description of supported field modifiers, see Field Modifiers.

Secondary Table

In the command dialog, this option allows you to choose, from the drop-down list, the secondary table that you want to merge with the primary table.

Note: For the Merge command, the secondary file must already be physically in the appropriate sorted order.

Primary Keys

The PKEY command parameter specifies which key fields or expressions from the primary table to use. In the command dialog, select the key fields from the "Merge Key Fields" primary list box or click [Choose] to display the Selected Fields dialog. For more information on the Selected Fields dialog, see Dialog Boxes.

Presort

The PRESORT command parameter specifies that the primary table is to be sorted on the primary key fields before merging the tables. In the command dialog, select the Presort checkbox beside the primary field list box.

Secondary Keys

The SKEY command parameter specifies which key fields or expressions from the secondary table to use. In the command dialog, select the key fields from the "Merge Key Fields" secondary list box or click [Choose] to display the Selected Fields dialog.

Command Mode Syntax

MERGE ON key-field-list¿

or

MERGE PKEY primary-key-fields SKEY secondary-key-fields¿

Note: Use the first variation of the command syntax when the key field names in the two tables are identical and are fields, not expressions. Otherwise, use the second variation.

You can add the following options to either syntax format:

TO create-table-name¿

Note: You cannot use the presort keyword within a group running in procedure mode.

Examples

To merge two years of accounts receivable data into one table (AR_2011 and AR_2010), ensure that both tables are sorted on the common account number key (No). Open AR_2011 as your primary table, then in the Merge dialog, select AR_2010 as your secondary table. Identify the account number field (No) as the key field for both tables. Click [OK] to create the merged table. The new merged table, AR_Merge, lists (by account number) all of the primary records prior to those of the secondary records for each key value. All aspects of the records remain unaltered and the sort sequences are maintained.
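The equivalent command-mode steps for this example might look like the following sketch. The table and field names are taken from the example above; the trailing position of the PRESORT keyword is an assumption.

COMMENT Sketch - names taken from the example above
OPEN "AR_2011"
OPEN "AR_2010" SECONDARY
MERGE ON No TO "AR_Merge" PRESORT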
Notify

Menu: Found in the TOOLS menu

Use Notify to send file or procedure information in the form of a message or an attachment to local or remote e-mail addresses. You can send the message to more than one person, provided your e-mail system supports multiple recipients.

Note: The Notify command cannot be issued from within a Group command.

Notify uses the SMTP and SMTPS mail server protocols and supports the mail encryption methods Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL). Notify is a send-only utility and is not intended for use as a full-scale mail system.

Note: The Notify command does not send authentication details (both username [sender] and password) to the mail server unless a password has been entered.

Note: If you are using a Microsoft e-mail application, ensure that your e-mail application is running before you start the procedure. Otherwise, a Windows message appears, prompting you to choose an e-mail profile. This prompt can cause the procedure to hang.

Setting up E-mail Server Information

Before using the Notify command, you should set up your default e-mail server information within the Notify tab of the Tools/Options dialog:

Sender
Password
Primary SMTP/SMTPS mail server
Optional secondary SMTP/SMTPS (fail-over) mail server
Port

If this information is pre-specified within the Notify options tab, it will be used to automatically populate the relevant dialog information whenever the Notify command dialog is opened. For more detailed information, see Notify Options.

Parameters

The Notify command has the following command parameters:

Sender

Specifies your user name for a local system or your e-mail address for an SMTP mail server. Multiple senders, separated by commas, can be specified when specifying multiple mail servers.

Note: This value will be auto-populated if a Sender has been specified in the Notify Options. The auto-populated value can be manually overwritten in the Notify command dialog.

Password

Specifies your password for local e-mails.

Note: This value will be auto-populated if a Password has been specified in the Notify Options. The auto-populated value can be manually overwritten in the Notify command dialog.

Note: Do NOT use the Notify Password parameter if your SMTP server does not require authentication.

SMTP Server [Mailbox]

Specifies a path to a primary SMTP/SMTPS mail server name. Additional secondary SMTP/SMTPS (fail-over) mail servers can be specified manually, separated by commas. A secondary SMTP/SMTPS (fail-over) mail server will only be used should the primary SMTP/SMTPS mail server fail. Specifying a secondary mail server is only appropriate when no credentials are required or the credentials are the same on both mail servers.

Note: The primary SMTP mail server will be auto-populated if an SMTP Server has been specified in the Notify Options. The auto-populated value can be manually overwritten in the Notify command dialog.

Note: A secondary SMTP (fail-over) mail server can also be specified in the Notify Options.

Port

Specifies the port to be used by the SMTP/SMTPS mail server. If no port is specified, Analyzer attempts to use the following ports: 25, 465 and 587. The port number should be specified if your mail server uses a port other than these defaults.

Note: This value will be auto-populated if a Port has been specified in the Notify Options. The auto-populated value can be manually overwritten in the Notify command dialog.

To [Address]

Specifies the names of local e-mail recipients or the e-mail addresses of remote recipients. Use commas to separate the names or addresses of multiple recipients.

Note: For remote e-mail recipients, specify the SMTP server in the SMTP Server text box.

Bcc

Specifies the names of blind carbon copy recipients for local e-mails and the e-mail addresses of recipients for remote e-mails.

Cc

Specifies the names of carbon copy recipients for local e-mails and the e-mail addresses of recipients for remote e-mails.

Subject

Specifies the text of the subject line.

Text [Message]

Specifies the text of the message.

Attachment

Specifies the path and file name of an attachment. Click [Browse] to open the Select File dialog. Multiple attachments can be sent, separated by commas only (no spaces). Use the [Browse] button to launch a Select File dialog, then use Ctrl-click or Shift-click to choose multiple attachments (provided the attachments are in the same folder).

Only Send If

Specifies an If test on the Notify command (as if preceding the Notify command with an If command). This means that the If test in the Notify command is only evaluated once, prior to running the Notify command - it is not evaluated for each record (as is normally done for an If parameter). In the Notify command dialog, users can click the Expression button to launch the Expression Builder to assist in building a logic expression.
Specifying Secondary (Fail-Over) Mail Servers

As mail servers may fail on occasion, the Notify command allows manual specification of one or more secondary SMTP/SMTPS (fail-over) mail servers to be used should the primary SMTP/SMTPS mail server fail. This also requires that a valid sender is manually provided for each specified mail server (unless one sender is valid for all specified mail servers). Senders should be listed in the same order as the SMTP/SMTPS mail servers are listed. Secondary (fail-over) mail servers should only be specified when no credentials are required or the credentials are the same on all mail servers.

When specifying secondary SMTP/SMTPS (fail-over) mail servers, the intended sender for each secondary SMTP/SMTPS (fail-over) mail server should be tested to ensure it is valid. Should the primary SMTP/SMTPS mail server fail, the Notify command will attempt to use the secondary (fail-over) SMTP/SMTPS mail servers in the order listed, until the e-mail can be sent or the last secondary SMTP/SMTPS (fail-over) server fails. See the examples below for specific details.

Command Mode Syntax

NOTIFY SENDER "sender_email" MAILBOX "pathname" PORT "port_number" ADDRESS "recipient"¿

Note: The If test in the Notify command is only evaluated once, prior to running the Notify command - it is not evaluated for each record (as is normally done for an If parameter).

Examples

Notify Command Syntax - Standard

An example of the syntax for a standard Notify command:

NOTIFY SENDER "[email protected]" MAILBOX "mail.gmail.com" PORT 587 ADDRESS "[email protected]" SUBJECT "Test from Analyzer" MESSAGE "Test"¿

Notify Command Syntax - Standard with an If Test and Attachment

An example of the syntax for a Notify command with an If test and an attachment:

NOTIFY SENDER "[email protected]" MAILBOX "mail.gmail.com" PORT 587 ADDRESS "[email protected]" SUBJECT "Test from Analyzer" MESSAGE "Test" ATTACHMENT "C:\Data\Excel_V1.xlsx" IF WRITE1>0¿

Notify Command Syntax - Multiple Attachments

An example of the syntax for a Notify command with multiple attachments (command mode only):

NOTIFY SENDER "[email protected]" PASSWORD "XY30D69A1420D5C98B7DB6779E307C2B166025A8D3564799ECFCEEDC2134BF85" MAILBOX "mail.gmail.com" ADDRESS "[email protected]" PORT 587 SUBJECT "Test from Analyzer" MESSAGE "Test" ATTACHMENT "C:\Data\Excel_V1.xlsx,C:\Data\Excel_V2.xlsx"¿

Note: The attachment list must be entered without extra spaces, with commas separating each attachment and with double quotes surrounding the full attachment list. Spaces that are part of a path or file name are valid, but no extra spaces can be added.

Notify Command Syntax - Single Sender, Multiple Secondary (Fail-Over) Mail Servers

An example of the syntax for a Notify command with one valid sender and multiple secondary SMTP (fail-over) mail servers:

NOTIFY SENDER "[email protected]" MAILBOX "mail.gmail.com,mail.verizon.com,mail.sprint.com" PORT 587 ADDRESS "[email protected]" SUBJECT "Test from Analyzer" MESSAGE "Test"¿

In the example above, only one sender was specified, but that sender is valid for all specified SMTP/SMTPS mail servers. Note that the primary SMTP/SMTPS mail server is listed first, followed by the list of secondary SMTP/SMTPS (fail-over) servers in the order they should be attempted. Should the primary SMTP/SMTPS mail server fail, Notify will attempt to send the e-mail with the same sender using the specified secondary (fail-over) mail servers, in the order listed.
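When the mail servers require different senders, a sender can be listed for each server, in matching order. A hedged sketch (the addresses are placeholders, as in the examples above):

NOTIFY SENDER "[email protected],[email protected]" MAILBOX "mail.gmail.com,mail.verizon.com" PORT 587 ADDRESS "[email protected]" SUBJECT "Test from Analyzer" MESSAGE "Test"¿

Here the first sender is used with mail.gmail.com and the second with mail.verizon.com, per the pairing rule described above.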
