ArcGIS Pro Exam Study Notes

Summary
These notes cover coordinate systems, change detection using imagery (Landsat 8), the Normalized Difference Vegetation Index (NDVI), and the Normalized Burn Ratio (NBR), along with geoprocessing environments and ModelBuilder, sharing maps and layers, route optimization, weighted suitability modeling, geostatistical interpolation, creating 3D features, distance analysis, Arcade expressions, and Python scripting.
Intro to Coordinate Systems

Geographic coordinate systems
- Earth-centered datum: reasonable fit over the whole globe.
- Local datum: good fit in a local area.

Projected coordinate systems
Type based on developable surface:
- For map areas that extend north-south, use a cylindrical projection.
- For map areas that extend east-west, use a conic projection.
- For map areas that have equal extent in all directions, use an azimuthal projection.

Type based on spatial property:
- Conformal projections preserve shape but not area.
- Equal-area projections preserve area but not shape.
- Equidistant projections preserve true scale from one or two points to every other point on the map, or along every meridian.
- Azimuthal projections preserve direction from one or two points to every other point.
- Gnomonic projections preserve the shortest route (distance and direction) but cannot preserve area.
- Compromise projections try to balance shape and area distortion.

No flat map can be both equal-area and conformal.

Change Detection Using Imagery

Landsat 8 collects 11 bands; only bands 1-4 and 8 record visible light.

Acquire temporal multispectral imagery
- Imagery must be available at least for the start and end of the change period (know the satellite launch dates). Intermediate images may be required for trends.
- Select images carefully based on the periodicity of data acquisition and the length of the change interval (temporal resolution).
- Imagery usually should be from the same time of year for each year and cover the same area of interest.
- Imagery should be of similar spatial, spectral, and radiometric resolution (not always possible).
- Images must be carefully registered and share projection information.

Perform change detection (a two-step process)
1. Create a custom band combination for visually detecting change.
   - Display imagery with different spectral band combinations (enhance the imagery, if necessary, for better visualization).
   - Create a custom band combination.
2. Apply a suitable change detection process to quantify the change.
   - Apply suitable image algebra (differencing, ratios, and indices).
   - Apply automated analysis, such as image classification techniques.

Interpret imagery and report results
- Classify (map) each image from the different dates.
- Use comparable classification schemes and strategies.
- Use visual interpretation.

NDVI (Normalized Difference Vegetation Index)

NDVI is a measure of healthy, green vegetation. This standardized index generates an image displaying greenness (relative biomass). The NDVI algorithm subtracts the red reflectance values from the near-infrared (NIR) band and divides the result by the sum of the NIR and red bands:

    NDVI = (NIR - Red) / (NIR + Red)

NDVI takes advantage of the contrasting characteristics of two bands of a multispectral raster dataset: chlorophyll pigment absorption in the red band and the high reflectivity of plant material in the NIR band. This differential reflection in the red and infrared bands lets you monitor the density and intensity of green vegetation growth using the spectral reflectivity of solar radiation. For a given pixel, NDVI ranges from -1 to +1. Positive values indicate greater reflectance in the NIR than in the visible red; the larger the NDVI value, the denser the healthy vegetation. Low to zero values indicate minimal vegetation, and values close to zero or negative generally correspond to impervious surfaces and water. (A short computation sketch follows.)
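As a quick illustration of the formula above, NDVI can be computed band by band with NumPy. This is a minimal sketch assuming the red and NIR bands have already been read into arrays; the array values below are hypothetical reflectances:

    import numpy as np

    def ndvi(nir, red):
        # Normalized difference, cell by cell: (NIR - Red) / (NIR + Red)
        nir = nir.astype("float64")
        red = red.astype("float64")
        denom = nir + red
        # Where both bands are zero, report 0 instead of dividing by zero
        return np.where(denom == 0, 0.0, (nir - red) / denom)

    nir = np.array([[0.50, 0.02]])   # a vegetated pixel, then a water pixel
    red = np.array([[0.08, 0.05]])
    print(ndvi(nir, red))            # approximately [[0.72, -0.43]]

The NBR covered below follows the same normalized-difference pattern, with a shortwave infrared band in place of the red band.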
Typical NDVI values:
- Negative values are mainly generated by clouds, water, and snow.
- Values near zero are mainly generated by rock and bare soil.
- Very low values (0.1 and below) correspond to barren areas of rock, sand, or snow.
- Moderate values (0.2 to 0.3) represent shrub and grassland.
- High values (0.6 to 0.8) indicate temperate and tropical rain forests.

NBR (Normalized Burn Ratio)

Land resource managers and fire officials use burn-severity maps from remote sensing instruments to predict areas of potential fire hazards, map fire perimeters, and study areas of vegetation regrowth after fires. Landsat imagery has traditionally been used to create burn-severity indices because of its repeated coverage, ease of access, and spectral wavelengths.

The normalized burn ratio (NBR) highlights burned areas and indexes the severity of a burn using multispectral imagery. The NBR algorithm requires a shortwave infrared (SWIR) band, so its implementation is limited to imaging platforms that have one (Landsat, MODIS). The formula is similar to that of NDVI, except that it uses the near-infrared (NIR) Band 4 and SWIR Band 7 wavelengths:

    NBR = (NIR - SWIR) / (NIR + SWIR)

NBR ranges from -1 to 1. It takes positive values in vegetated areas and negative values over bare soil. In burned areas, NBR values decline as fire severity increases. The difference between pre-fire and post-fire NBR (dNBR) is used to classify burn severity:

    dNBR range       Burn-severity level
    < -0.25          High post-fire regrowth
    -0.25 to -0.1    Low post-fire regrowth
    -0.1 to 0.1      Unburned
    0.1 to 0.27      Low-severity burn
    0.27 to 0.44     Moderate- to low-severity burn
    0.44 to 0.66     Moderate- to high-severity burn
    > 0.66           High-severity burn

Getting Started with Geoprocessing

Levels of environment settings (the levels show which settings are overridden and how they are saved; environment settings apply to geoprocessing tools run within a project):
- Application settings: system-wide defaults; saved to the geoprocessing settings.
- Tool settings: temporarily override application settings; not saved anywhere.
  - Script: can override passed-down settings.
- Model settings: can override tool- and application-level settings; saved with the model.
  - Model process settings: can override model-level settings; saved with the model.
- Script: can override passed-down settings.

Building Geoprocessing Models Using ArcGIS Pro

Planning your analysis
When you are performing GIS analysis or completing a data management project with ModelBuilder, consider these questions before you begin building your model:
- What is the goal of the model that you want to build?
- What data do you need to use in the model?
- What is the most effective workflow to follow to achieve your goals?

ModelBuilder planning guidelines:
- Determine the scenario and criteria for analysis or data management, and set goals for your model.
- Explore and gather the necessary data, which may involve creating and editing data.
- Choose tools that will enable you to achieve your goals; this requires that you thoroughly understand both your goals and the available tools.
- Build and run the model. By this time, you should have already determined everything that you need to complete it.
- Explore and refine the results of your model. View, analyze, and symbolize your results to determine whether they are satisfactory; if necessary, run the model again with different tool inputs.
Sharing Maps and Layers with ArcGIS Pro

ArcGIS Pro can share the following types of files:
- Project template: ensures that projects conform to standards and start with customizations such as custom basemaps and layouts.
- Layer file: contains layer properties such as symbology and labeling. Data referenced by the layer must be accessible in order to use the layer file.
- Map file: contains a shell of what was saved in the map. Data referenced by the map layers must be accessible.
- Layout file: contains the page, layout elements, and maps referenced by map frames on the page. Data referenced by the layout must be accessible.
- Report file: contains the report document view, added elements, a reference to the layer or data source, and references to any subreports or supplemental pages.
- Task file: contains a set of preconfigured steps that guide users through a workflow or process. Data referenced by the task must be accessible.
- Style file: contains symbols, color schemes, label placements, and other style items to promote standardization.

You can share a map as a web map, or share selected layers from a map as web layers. Web maps are interactive displays made up of a collection of web layers. The following types of web layers can be shared: feature, tile, vector tile, map image, imagery, scene, and elevation. Web maps can be used with desktop applications and are best used for building applications such as dashboards or story maps. Web layers can be used to create new maps for visualization, editing, query, and analysis.

Creating Optimized Routes Using ArcGIS Pro

Costs
Cost values are what make one route more desirable than another. There is always at least one cost when you perform a route analysis, and the most commonly used cost is distance. To model the world more accurately, however, additional costs such as time should also be represented: not every road between the start and end points can be traversed at the same speed, or with the same amount of traffic, curves, or traffic lights.

Sequencing
The sequence of the stops along a route is important. The stops do not start out in any logical order when they are imported into your project; they are simply in the order in which they were added to the feature class. ArcGIS Network Analyst provides multiple options for sequencing the stops along a route (a brute-force illustration of Find Best follows the table):

    Sequence type               Definition
    Use Current                 The route visits the stops in the order in which they were created.
    Find Best                   The route is based on the most efficient way to visit all the stops.
    Preserve First Stop         The route starts at the first stop, but every stop after that can be reordered.
    Preserve Last Stop          The route ends at the last stop, but every stop before that can be reordered.
    Preserve First & Last Stop  The route starts at the first stop and ends at the last stop, but every other stop can be reordered.
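To see why Find Best can differ from Use Current, here is a toy brute-force sketch. It is not how Network Analyst solves the problem (the real solver works on a network dataset with impedances); it simply enumerates stop orderings between a fixed first and last stop, using hypothetical coordinates and straight-line distance:

    import itertools
    import math

    stops = {"A": (0, 0), "B": (5, 1), "C": (1, 4), "D": (6, 5)}  # hypothetical stops

    def length(order):
        # Total straight-line length of a route visiting stops in the given order
        return sum(math.dist(stops[a], stops[b]) for a, b in zip(order, order[1:]))

    current = ["A", "B", "C", "D"]  # Use Current: the order the stops were created
    best = min(
        (["A"] + list(mid) + ["D"] for mid in itertools.permutations(["B", "C"])),
        key=length,
    )  # Preserve First & Last Stop, with the middle stops reordered

    print(round(length(current), 2), current)  # 15.2  ['A', 'B', 'C', 'D']
    print(round(length(best), 2), best)        # 13.25 ['A', 'C', 'B', 'D']

Swapping the two middle stops shortens the route, which is exactly the kind of reordering the Find Best and Preserve First & Last Stop options allow.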
Barriers
The real world does not always match the model that you are using. In route analysis, the best route found might not be the best in reality, because obstructions or barriers may not be included in the model. Barriers can be effectively permanent when the data has fallen out of date, such as roads that have had traffic lights added or that have been narrowed or rerouted. Barriers can also be temporary, such as construction work being done on a road or a temporary road closure.

Suitability Modeling: Creating a Weighted Suitability Model

The development of any suitability model is driven by an effective goal and criteria. The model goal defines the phenomenon, study area, and purpose of the analysis. The model criteria define the model variables and the values that are considered suitable for each variable. The suitability modeling process involves creating rasters to represent each model criterion and then combining the rasters into a single surface that meets the model goal. Model criteria are represented by rasters that share a common suitability scale, which allows them to be combined. The common scale varies by model type:
- Simple models: the scale is either 0 (not all criteria are met) or 1 (all criteria are met).
- Fuzzy models: the scale ranges from 0 (low possibility of membership) to 1 (high possibility of membership).
- Weighted models: the scale ranges from 1 (lowest preference) to a user-defined value (highest preference).

Weighted suitability model workflow
When creating a weighted suitability model in ArcGIS Pro, you can complete the six-step workflow in nonlinear order; however, it is best practice to define the model goal and criteria first. This course covers the Transform, Weight and Combine, Locate, and Analyze steps.

1. Define
   Define the goal, supporting criteria, and evaluation metrics for the weighted suitability model.

2. Derive
   Derive data that represents the model variables defined by the criteria. In this example, the criterion "far distances from streets" defines distance from streets as a model variable, and a raster representing distance from streets is derived from street centerlines using a geoprocessing tool.

3. Transform
   Transform the values in each derived dataset to a common suitability scale by assigning each cell a suitability score (a value on the suitability scale). For each dataset, assign the highest suitability scores to the variable values that are most preferred according to the associated criterion. In this example, the distance-from-streets raster is transformed to a 1-to-5 suitability scale: locations closest to streets are assigned 1 (lowest preference), and locations farthest from streets are assigned 5 (highest preference).

   Transformation approaches by model variable data type (a sketch of this step follows the list):
   - Discrete data with nominal or ordinal values -> Categories: a table maps each category in the derived surface to a score on the suitability scale. For example, cells categorized as agricultural land use (x-value) are assigned a suitability score of 2 (y-value).
   - Continuous data with interval or ratio values -> Ranges of values: a table maps ranges of values in the derived surface to a score on the suitability scale. For example, an x-value of 12 meters falls within the range of 10 to 20 meters, so that cell is assigned a suitability score of 2 (y-value).
   - Continuous data with interval or ratio values -> Mathematical functions: a function transforms the values in the derived surface to a score on the suitability scale. For example, an x-value of 12 meters is input to a linear function to calculate a suitability score of 2 (y-value).
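A minimal NumPy sketch of the ranges-of-values transformation, plus a preview of the Weight and Combine step described next. The rasters, class breaks, and weights are all hypothetical; in ArcGIS Pro these steps would typically be performed with reclassification and weighted-overlay geoprocessing tools:

    import numpy as np

    # Hypothetical derived rasters: distance from streets (meters), slope (degrees)
    dist = np.array([[5.0, 12.0, 250.0], [40.0, 90.0, 600.0]])
    slope = np.array([[2.0, 8.0, 15.0], [30.0, 5.0, 1.0]])

    # Transform: map ranges of values onto a 1-to-5 suitability scale
    # (np.digitize returns the index of the bin that each cell value falls in)
    dist_score = np.digitize(dist, [10, 50, 100, 500]) + 1  # farther = more suitable
    slope_score = 5 - np.digitize(slope, [5, 10, 20, 35])   # flatter = more suitable

    # Weight and combine: weights applied as percentages that add up to 100
    suitability = 0.6 * dist_score + 0.4 * slope_score
    print(suitability)

Because the weights sum to 100 percent, the combined surface stays on the user-defined 1-to-5 scale, which matches the "weights as a percentage" behavior described below.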
4. Weight and Combine
   Weight and combine the transformed data, which represents the model criteria, into a single suitability surface that meets the model goal. In this example, three transformed rasters are combined to create the suitability surface. Weights enable you to consider the relative importance of each criterion to the overall goal; determining weights for a specific application can be done in several ways and may require collaboration with subject matter experts. In ArcGIS Pro, weights can be applied as a multiplier or as a percentage, and the values in the suitability surface are calculated as the weighted sum of the values in the transformed data.
   - Weights as a multiplier: weight values can be any positive decimal or integer value greater than 1. Applying weights as a multiplier gives the suitability surface higher contrast between the scores of the most suitable locations.
   - Weights as a percentage: weight values are relative percentages that add up to 100. Applying weights as a percentage maintains the user-defined suitability scale.

5. Locate
   Locate the phenomenon by using the suitability surface. In this example, the region with the highest average suitability is identified.
   - Suitability score: from the weighted suitability model, used to locate suitable regions; higher values mean greater preference. In this context, the suitability score is referred to as utility.
   - Region shape: shapes used to represent the type of region being modeled can include circle, triangle, square, and ellipse.
   - Shape vs. utility: the extent of each candidate region is determined by the trade-off between maintaining the specified geometry and maximizing utility (suitability scores), expressed as a percentage:
     - 100% = maintaining geometry is most important
     - 0-100% = a mix of maintaining geometry and maximizing utility
     - 0% = extent based solely on suitability scores
   - If there are multiple candidate regions, add additional criteria:
     - determine a specific number of regions
     - determine minimum/maximum inter-region distance
     - choose an evaluation method (for example, highest average value, highest sum, highest number)
     - define a cost: create a cost raster and find least-cost paths between regions (think of migration paths)

6. Analyze
   Analyze the result by visually evaluating the suitability surface and regions to ensure that the model goal has been met. Optionally, perform sensitivity and error analysis.

   Evaluation approaches:
   - Visually evaluate the results: review the transformed datasets and the final suitability surface to ensure that the results meet your goal and criteria. Results can be visually evaluated throughout the modeling process.
   - Visit the site: confirm that criteria are still met; the source data may be outdated as a result of development or natural hazards, such as landslides or fire.
   - Request review: ask subject matter experts to review the results.
   - Perform sensitivity analysis: explore the effect of altering a model assumption, such as modifying a transformation parameter or a weight.
   - Perform error analysis: understand the effects of geometry error or measurement error on the results.
     - Accounting for geometry error: the impact of error from a continuous source dataset can be analyzed by randomly adding and subtracting values, within the error tolerance, to cells within the dataset.
     - Accounting for measurement error: the impact of error from a discrete source dataset can be evaluated by expanding, buffering, or offsetting the geometry.
       - Raster: a geoprocessing tool can expand specified zones of a raster by a defined number of cells. For example, to account for expanded areas of urbanization, you might use the Expand tool on the cells classified as developed in a land-cover dataset.
       - Vector: editing tools can buffer or offset a feature by a specified distance. For example, the error in a stream centerline may be plus or minus 10 feet.
   - Create alternative scenarios: model different assumptions. For example, if the suitability model includes a criterion related to average annual rainfall, an alternative scenario could model projected future average annual rainfall.

Binary mask: an overlay used to represent the study area, removing unsuitable areas from the analysis.

Geostatistical Interpolation: Introduction

What is it?
Two groupings of spatial interpolation techniques are used to calculate a prediction surface:
- Deterministic: predictions are made at locations based on surrounding measured values and on specified mathematical formulas whose parameters the user defines.
- Geostatistical: predictions are made at locations based on statistical models that describe the spatial configuration of the measured values. These models incorporate an element of randomness, which helps create not only a prediction surface but also an estimate of accuracy at each prediction location.

All geostatistical methods are built on the semivariance equation. A kriging model combines deterministic trends with spatially autocorrelated random error: each prediction location uses the mean as an initial guess, and the prediction is adjusted based on the values of local spatial neighbors.

Spatial autocorrelation is a measure of the degree to which nearby locations are more similar in value than locations that are farther away. The semivariance and covariance functions model the degree of spatial autocorrelation as a function of the distance between pairs of points. For each bin (distance range), the average semivariance or covariance and the average distance are calculated and plotted as crosses on the semivariogram or covariance chart. Covariance does not apply to geostatistical methods that use empirical Bayesian kriging; however, all geostatistical interpolation methods, including empirical Bayesian kriging, can use the semivariance function.

The model is a specific semivariogram or covariance function that is fit to the spatial structure of the empirical semivariance or covariance values and is then used to make the predictions. Several functions can be specified for the model, such as spherical, circular, and K-Bessel, and the available model types vary for each geostatistical interpolation method. It is important to pick a model type that best fits the spatial structure of the phenomenon. Also consider the trade-off of speed versus accuracy: the default model may process faster but produce less accurate results, while K-Bessel is considered the most accurate and flexible model but is slower and may not be the best fit for a specific phenomenon.
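To make the empirical side of this concrete, the standard estimator averages squared value differences over the N(h) point pairs whose separation falls in each distance bin h: gamma(h) = (1 / (2 * N(h))) * sum of (z_i - z_j)^2. Here is a minimal sketch with hypothetical sample points; Geostatistical Analyst handles the binning and the model fitting for you:

    import itertools
    import math
    import numpy as np

    # Hypothetical sample points: (x, y, measured value)
    pts = [(0, 0, 10.0), (1, 0, 11.0), (0, 2, 13.0), (3, 3, 18.0), (4, 1, 16.0)]

    bins = [1.5, 3.0, 4.5, 6.0]    # upper edges of the distance bins
    sums = np.zeros(len(bins))     # accumulated squared differences per bin
    counts = np.zeros(len(bins))   # number of point pairs per bin

    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(pts, 2):
        d = math.hypot(x2 - x1, y2 - y1)
        b = np.searchsorted(bins, d)          # index of the bin holding distance d
        if b < len(bins):
            sums[b] += (z1 - z2) ** 2
            counts[b] += 1

    with np.errstate(invalid="ignore"):       # empty bins produce nan
        gamma = sums / (2 * counts)
    print(gamma)   # one average semivariance per bin: the crosses on the chart

A semivariogram model (spherical, K-Bessel, and so on) is then fit to these binned values, and it is the fitted curve, not the raw crosses, that drives the predictions.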
Q: Which function is used to model the spatially autocorrelated random error component?
A: The semivariogram model.

The y-intercept of the model is referred to as the nugget, a measure of variability over very small distances. For example, two points located at nearly the same location may still vary slightly in value; the nugget is often attributed to measurement error.

A key data assumption for geostatistical interpolation states that the random local-scale variations in the values of a phenomenon are spatially autocorrelated. In terms of the model chart, local scale is represented as a distance range: pairs of sample points separated by less than the range have values that are spatially autocorrelated. It is important that the model is properly fitted within the range, because these values contribute most to the predicted values; at distances greater than the range, the fit of the model is less critical.

The sill represents the semivariance or covariance value at the distance where spatial autocorrelation drops off. On a model chart, the sill is the y-value of the model associated with the major range (x-value). The partial sill is the sill minus the nugget, and it describes the maximum variance in the spatially autocorrelated values.

The workflow for geostatistical interpolation consists of three steps:
1. Perform exploratory spatial data analysis (ESDA)
   Use maps and charts to gain a deeper understanding of your data. By better understanding the phenomenon that you are investigating, you can make more informed decisions about how to construct the interpolation model.
2. Create a prediction surface
   Create a prediction surface from sample points or polygon centroids using a geostatistical interpolation method:
   - Model the spatial structure of the sample points by fitting a function (the semivariogram model) to the average empirical semivariances.
   - For each prediction location, identify nearby sample points as neighbors based on specified search criteria.
   - Predict new values for the phenomenon using the semivariogram model and the neighboring sample points.
3. Cross-validate the results
   Compare the predicted values to the measured values at the corresponding locations to ensure that the prediction surface is valid. In some cases, compare your cross-validation results with the results of other spatial interpolation methods.

Understanding data assumptions
- Continuous phenomenon: a phenomenon is considered continuous if there is a value at every location in the study area. This assumption is fundamental to all types of spatial interpolation, including geostatistical interpolation; to predict values at unmeasured locations, the phenomenon must be spatially continuous.
- Spatial autocorrelation: the random local-scale variations in the values of the phenomenon are spatially autocorrelated, meaning nearby features are more similar in value than features that are farther apart. In geostatistical interpolation, this assumption is the basis of the semivariogram, a chart with semivariance (similarity in value) on the y-axis and distance on the x-axis.
- Stationarity: the statistical properties of the data, such as mean and variance, and the relationship between the values at two locations depend only on the distance between them, not on their exact location. In other words, the correlation between data at any two locations is explained by the distance between them; data meeting these requirements is considered stationary. In geostatistical interpolation, this assumption is required for estimating the true semivariogram. Methods that use empirical Bayesian kriging can account for moderate nonstationarity in the data.
- Normal distribution: the probability density function of the measured values follows a bell-shaped (Gaussian) curve, indicating that the data is normally distributed. For normally distributed data, approximately 95 percent of the values fall within two standard deviations of the mean. In geostatistical interpolation, margins of error can be constructed for the predicted values if the measured values follow a normal distribution.
- No spatial clustering: the data is randomly or regularly spaced, not spatially clustered. If you preferentially sample in one area where spatial autocorrelation is present, the sample points may be clustered and have similar high or low values, and a histogram created from those sample values may not truly represent the histogram of the population.
- Global trends: there may be some fixed deterministic trend in the measured data that describes large-scale variation in the phenomenon. Modeling this deterministic trend may be part of the geostatistical interpolation process.

Charting data distributions
- Histogram (chart): visualizes the frequency distribution of sample values. The x-axis runs from the minimum to the maximum value, the y-axis is a count, and the bins are equal-interval ranges.
  - In ArcGIS Pro, you can plot the mean, median, standard deviation, and a normal-distribution reference line.
  - In ArcGIS Pro, selecting a bin selects all the points in that bin on the map.
- Histogram (statistics): used to determine whether data is normally distributed, that is:
  - mean is approximately equal to the median
  - skewness (symmetry) is approximately 0
  - kurtosis (shape around the tails) is approximately 3, using the standard kurtosis formula
- QQ (quantile-quantile) plots: compare the distribution of the quantiles of a numeric variable to a normal distribution. If the compared distributions are identical, the plotted points form a straight 45-degree line; the farther the plotted points deviate from the reference line on a normal QQ plot, the more the distribution deviates from the normal distribution. (A quick numeric check of these rules of thumb follows.)
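As a numeric companion to the histogram rules of thumb above, this sketch checks mean versus median, skewness, and kurtosis with SciPy on simulated data. Note that scipy.stats.kurtosis reports excess kurtosis by default, so fisher=False is passed to match the "approximately 3" convention used in these notes:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    values = rng.normal(loc=100, scale=15, size=5000)   # hypothetical sample values

    print(np.mean(values), np.median(values))        # should be close to each other
    print(stats.skew(values))                        # ~0 for symmetric data
    print(stats.kurtosis(values, fisher=False))      # ~3 for a normal distribution

Strongly skewed sample values would fail these checks, which is often a cue to consider a transformation before interpolating.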
Methods
- 2D points: kriging and empirical Bayesian kriging (EBK) can be used.
  - EBK accounts for uncertainty in the semivariogram model, uses subsets to handle nonstationarity, and produces more accurate predictions and standard errors of prediction than regular kriging.
- 3D points: EBK 3D can be used.
  - EBK 3D creates a 3D prediction surface from a dataset with x, y, elevation, and measured values.
- Polygons (areal interpolation): used to reaggregate data from source polygons to target polygons and to fill in missing polygon data.
  - Areal interpolation uses kriging theory to create a 2D prediction surface and then applies it to the target polygons.

Choosing statistical interpolation methods
Exploratory interpolation generates various interpolation results and then compares and ranks those results using customizable criteria based on cross-validation statistics. Each interpolation result receives a ranking and a summary of the cross-validation statistics used.
- A single criterion can be used for comparing interpolation results that are known to be stable and consistent, or for choosing between results that are all very similar. A single criterion can rank results by highest prediction accuracy, lowest bias, lowest worst-case error, highest standard error accuracy, or highest precision.
- Hierarchical sorting uses multiple criteria, specified in priority order, and takes into account the relative differences of the cross-validation statistics.
- Weighted average rank always uses all criteria in the comparison and allows for preferences of some criteria over others. The interpolation results are ranked independently by each criterion, and a weighted average of the ranks determines the final ranks.

Creating 3D Features from Existing Data

- DEM: a digital elevation model is a raster data type in which each cell represents a continuous surface, usually the elevation of the surface of the earth. In some parts of the world, DEMs are synonymous with digital terrain models (DTMs). DEMs are best used for modeling the bare earth, without features such as buildings or trees.
- DSM: a digital surface model is a raster data type in which each cell represents the elevation of both natural and built features, such as trees and the tops of buildings, on the earth's surface. DSM rasters are useful for 3D modeling in industries such as urban planning, telecommunications, and aviation.
- DHM: a digital height model, or normalized digital surface model (nDSM), is a raster data type in which each cell represents the height, not the elevation, of features at each location. Created by subtracting a DEM raster from a DSM raster (DHM = DSM - DEM), a DHM can be used to estimate the height of features such as buildings or trees.

Interpolating z-values for 3D features (a small sketch of the bilinear case follows):
- Nearest neighbor: obtains an elevation value by measuring the distance between a feature vertex and the nearby raster cell centers; the closest cell center supplies the vertex's z-value.
- Bilinear: determines the value of the new 3D vertex from the four nearest raster cell values, using distance-based weights. The closer the vertex is to a cell center, the higher the weight of that cell's value; as the vertex moves away from a cell center, more weight shifts to the surrounding cell values.
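A minimal sketch of bilinear interpolation over a single grid cell, assuming the four surrounding cell-center values are already known (a real raster adds georeferencing and cell-size handling on top of this):

    def bilinear(z00, z10, z01, z11, fx, fy):
        # z00..z11 are the four surrounding cell-center values; fx and fy are the
        # vertex's fractional offsets (0 to 1) from the lower-left cell center
        bottom = z00 * (1 - fx) + z10 * fx
        top = z01 * (1 - fx) + z11 * fx
        return bottom * (1 - fy) + top * fy

    # Vertex a quarter of the way in x and halfway in y between four elevations
    print(bilinear(100.0, 104.0, 102.0, 110.0, fx=0.25, fy=0.5))   # 102.5

With fx = fy = 0, the vertex sits exactly on a cell center and gets that cell's value back, which is also what nearest neighbor would return there.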
3D model types and file extensions:

    3D model type                         File extension
    ArcGIS CityEngine rule packages       .rpk
    COLLADA                               .dae
    Wavefront OBJ                         .obj
    Building information modeling (BIM)   .rvt or .ifc
    3D Studio Max                         .3ds
    Autodesk Filmbox                      .fbx
    OpenFlight                            .flt
    VRML and GeoVRML                      .wrl
    GL Transmission Format                .gltf
    Binary GL Transmission Format         .glb

Methods for using 3D models:
- Reference: reference BIM files (Revit, IFC) directly in ArcGIS Pro without the need for conversion to a native ArcGIS 3D feature.
- Convert: add a model from a template gallery, create a 3D object feature class, import a 3D model file as a multipatch feature class, or replace an existing 3D feature class with a 3D model. Source formats include COLLADA, Wavefront OBJ, 3D Studio Max, Autodesk Filmbox, OpenFlight, VRML and GeoVRML, and GL Transmission Format (standard and binary).

Distance Analysis

Introduction to Distance Analysis

Inputs to a distance analysis:
- Source locations (raster or vector): the positions from which distances are calculated. Always use at least one source location as the reference to measure distance from.
- Destination locations (raster or vector): the positions to which paths are generated from the source locations. Use if your analysis requires path generation.
- Barriers (raster or vector): the parts of the landscape that cannot be traversed. Use to account for movement around impassable areas.
- Digital elevation model (DEM) (raster): each cell indicates the elevation at that cell location. Use if the distance analysis considers surface distance; other inputs may also be derived from a DEM.
- Slope (raster): each cell indicates the terrain's steepness at that cell location, typically derived from a DEM. Use when accounting for the effect of steepness on movement.
- Land cover (raster or vector): categorical data describing the physical material at the earth's surface. Use when accounting for the effect of surface material on movement.

Costs can include slope, land cover, and money.

Distance analysis methods:
- Distance accumulation: determines how difficult it is to reach the closest source feature from every location in a study area. Example: identify which settlements are too far away from a source of clean water.
- Distance allocation: divides an area into regions around a set of source features; each region indicates the easiest-to-reach source feature. Example: predict the territories of a set of red foxes based on the locations of their dens.
- Path generation: determines the optimal path(s) connecting two or more locations. Example: plan the best route for a natural gas pipeline.

Distance Analysis: Using Distance Accumulation and Distance Allocation

Methods for calculating distances (a toy cost-accumulation sketch follows this list):
- Distance accumulation calculates the difficulty of travel from every location in a study area to the easiest-to-reach source.
- Distance allocation divides a study area into regions that share the same easiest-to-reach source.
- The corridor method determines which areas contain the easier-to-traverse routes between two sources; routes that go outside the corridor are more difficult to traverse.
- The optimal path method calculates the single set of easiest-to-traverse paths connecting sources and destinations. Unlike the corridor method, the optimal path method can find routes between more than two locations.
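To illustrate what distance accumulation computes, here is a toy sketch that accumulates travel cost outward from one source cell across a small cost raster, using Dijkstra's algorithm with 4-way neighbors. The cost values are hypothetical, and the Pro tools additionally handle diagonal moves, surface distance, and horizontal and vertical factors:

    import heapq

    cost = [          # hypothetical cost raster: effort to cross each cell
        [1, 1, 4],
        [1, 8, 4],
        [1, 1, 1],
    ]
    rows, cols = len(cost), len(cost[0])
    source = (0, 0)

    acc = {source: 0.0}            # accumulated cost to reach each cell
    pq = [(0.0, source)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > acc.get((r, c), float("inf")):
            continue               # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Average the two cells' costs across the move, a simple cost model
                nd = d + (cost[r][c] + cost[nr][nc]) / 2
                if nd < acc.get((nr, nc), float("inf")):
                    acc[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))

    for r in range(rows):
        print([round(acc[(r, c)], 1) for c in range(cols)])

Note how the accumulated cost routes around the expensive center cell. Distance allocation would additionally record, for each cell, which source produced its smallest accumulated cost.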
Benefits of calculating distances with additional inputs:
- Barriers
- Surface distance
- Cost surface: accounts for how difficult it is to travel across an area, regardless of the direction of travel. Land cover is a good example of this type of impedance.
- Horizontal factors: sources of impedance whose effect on the rate of travel varies with the azimuthal (compass) angle of travel. Wind and current are good examples: if the wind is blowing from the north, traveling north will typically be more difficult than traveling south (think wind resistance).
- Vertical factors: sources of impedance whose effect on the rate of travel varies with the altitudinal (up-down) angle of travel. Slope is a good example: traveling up a steeply angled slope is typically more difficult than traveling up a shallow one (think moving uphill).

Source characteristics:
- Initial accumulation: a traveler's fixed cost in time or effort before travel can begin. Example: firefighters may need time to gather their equipment and prepare their vehicle before they can start traveling.
- Maximum accumulation: the total distance that a traveler can move before stopping. Example: a gas-powered all-terrain vehicle (ATV) can hold only a fixed amount of fuel; when the fuel runs out, the ATV's rate of travel is zero.
- Multiplier: a measure that accounts for the fact that different travelers may move at different speeds. Example: a bear moving with two small cubs may move more slowly than a bear without cubs.

Introduction to Arcade

Creating Arcade Expressions
ArcGIS Arcade is an expression language that can be used to create custom content in applications across the ArcGIS system. It enables you to pull data from layers and features and create new values, which can be used to perform a wide range of tasks. These values can consist of various data types, including arrays, Booleans, dates, dictionaries, geometries, numbers, and text, and can be used across several execution environments. Arcade was initially built to create custom labels, symbolization, and pop-ups, but it has been developed over time to include a significantly wider range of capabilities. It can now be used in ArcGIS applications, such as ArcGIS Dashboards and ArcGIS Field Maps, to format elements and create conditional rules that control form visibility. Arcade can also be employed in other environments to create attribute rules, calculate new field values, evaluate feature constraints, perform geoanalytics, and configure geotriggers, among many other uses. Arcade is used throughout ArcGIS, and many expressions can be used interchangeably between applications and devices; for instance, an expression that generates a custom label in ArcGIS Pro can also be used to create a similar label in ArcGIS Online and ArcGIS Enterprise.
In this course, you will learn about three of the most common Arcade workflows: formatting complex labels, creating informative pop-ups, and visualizing features based on Arcade expressions.

Arcade is a lightweight expression language: each expression is evaluated, and a resulting value is generated. Expressions work similarly to calculations in a spreadsheet, where a formula or script generates a value that appears in a field. Arcade expressions in ArcGIS calculate new values on the fly using attribute data or geographic data, and every Arcade expression, whether a single-line expression or a complex calculation, must return a value:

    Expression --> Evaluation --> Result

Because Arcade expressions can consider geographic location in addition to the information contained in each input, an Arcade-derived field can be created through a spatial operation, such as Intersect. Although Arcade can be used in field calculations, these derived attributes do not have to be permanent and can be created within the application by the Arcade expression.

Arcade profiles define the execution environment where Arcade expressions are supported, specify the function bundles available for use within the expression, and provide profile variables to aid in data access. In ArcGIS Online and ArcGIS Pro, Arcade expressions are created and saved using the Arcade editor. Expressions can also be used as input parameters for geoprocessing or large data and analytics tools. The ArcGIS Arcade Playground is a web-based application that provides a test environment for running Arcade expressions. Arcade expressions can be used in many ArcGIS applications, including ArcGIS Maps SDK for JavaScript web apps and ArcGIS Dashboards.

Within the Arcade editor in Map Viewer, two primary troubleshooting tools help when you encounter errors in an expression: the Run button and the Console function. The Run button checks the expression and returns a value if it succeeds; as you build an expression, it can show the potential output. The Console function can be used within an expression to log messages for debugging, including intermediate processing values that show how the expression progresses.

Using Arcade Expressions in ArcGIS

Within each Arcade expression, several language features are available to build the desired expression. The primary components are organized into five groups:
- Functions
- Function bundles
- Variables
- Constants
- Iterators

Functions
Arcade functions are preset processes that manipulate values in specific ways according to their inputs. These functions can include data manipulation and logical or tracking processes.

Function bundles
Function bundles are sets of functions that can be included in the implementation of a profile.
All profiles can use functions in the Core bundle, which includes all constants, array functions, debugging functions, feature functions, math functions, text functions, and others. Some profiles can also use other function bundles, such as the data access, database, geometry, portal access, and track bundles.

Variables
Variables are references to types of data that can be used in more complex expressions. Variables have a local or global scope, depending on where they are recognized in relation to the function: a variable with local scope is defined within a function and can override a variable with global scope, while a variable with global scope is recognized outside the function.

Constants
Specific immutable values are provided in the Arcade editor for convenience when creating an expression. Pi and Infinity are provided, as well as some text-manipulation values that can alter text in the expression in specific ways.

Iterators
Iterators include if-else conditional statements as well as for loops and while loops. This component enables the implementation of logic within an expression, allowing the expression to return specific results depending on the input conditions.

FeatureSet
A FeatureSet is a collection of features accessed from a map layer or feature layer. FeatureSets allow you to access features from feature service layers within the map or the feature service.

Python

Python is a free, open-source scripting language that is commonly used to automate tasks. Its code structure, referred to as syntax, is easy to read and understand; the syntax helps beginners learn the language and helps professionals minimize development and maintenance costs. Python supports third-party modules and site packages, so you can integrate a vast number of tools and programs. Debugging scripts is simple because Python has a built-in process to find and report errors.

Python allows you to perform and automate tasks through a single script. A Python script is a sequence of instructions, and the structure and arrangement of those instructions are referred to as syntax. Python instructions use various language elements, including data types, statements, and functions. If you think of a Python script as a recipe, the data types are the recipe's ingredients, and the statements and functions are the recipe's instructions; you need ingredients and instructions for a recipe, just as you need data types, statements, and functions for a script.

With all the programming languages available, you may wonder which to choose. Python provides many benefits, including low cost, clean syntax, various scripting environments, high scalability, and ever-progressing development.
- Cost: Python is free to use and develop, so entry-level programmers can easily learn and use it. For professional programmers and organizations, a free programming language helps keep production and development costs to a minimum.
- Syntax: Python syntax is simple and elegant, which makes it easy to learn, read, and understand, both for new users learning the language and for advanced users working with more complex code.
- Scripting environments: Python scripts can be written in basic text editors such as Notepad.
However, a variety of other scripting environments offer more functionality for writing code, providing benefits such as code completion, formatting, custom run environments, and debugging tools. Some scripting environments are stand-alone applications, while other applications, such as ArcGIS Pro, have integrated Python environments like ArcGIS Notebooks and the Python window.
- Scalability: Python is highly scalable. It can be run as small snippets of code to test functionality, written as simple scripts to accomplish small-scale tasks, or developed into complex projects requiring development teams.
- Development: due to its popularity and power as a programming language, Python has an active and robust development community. The language and the countless available third-party libraries are constantly in development; as a result, processes become more efficient and refined, functionality expands, and Python's potential continues to grow over time.

Python is composed of modules, which are collections of related code that contain variables, definitions, and instructions. Modules are organized into packages, which can be further grouped into libraries. Python comes out of the box with a standard library that supports common programming tasks, such as reading and writing files, searching text using regular expressions, performing mathematical calculations, interacting with the operating system, navigating networks, connecting to web servers, creating logical workflows, and automating processes.

Core capabilities of the language can also be extended by importing third-party libraries, which are typically specialized and enable higher-level tasks. For example, Pandas enables you to work with tabular data; NumPy and SciPy provide advanced mathematical and scientific computation; Matplotlib lets you visualize data graphically; and ArcPy and ArcGIS API for Python enable you to work with and manage spatial data. Popular options for GIS include ArcPy, Pandas, NumPy, ArcGIS API for Python, Matplotlib, and SciPy.

Script Environments

There are various types of environments to consider when creating and using Python scripts; some relate to software configurations, while others are more conceptual in nature. Three conceptual environments for Python development are scripting, run, and storage environments. Depending on the available software and your project needs, these environments can be integrated, separate, or approached in a hybrid manner. It is important to understand each conceptual environment before you begin to create and use scripts.
- Scripting environments: where Python workflows are written as code. You can often configure these environments to maintain specific Python versions, include required Python libraries, control formatting, and assist with debugging and troubleshooting.
- Run environments: where Python instructions are executed. Python run environments are often bundled with scripting environments for ease of use and continuity of environment attributes.
- Storage environments: Python scripts can be stored as stand-alone .py files, which can be interpreted by most scripting and run environments.
Alternative storage methods exist within integrated systems; for example, in ArcGIS Pro you can store Python scripts as script tools and notebooks.

Several scripting environments are available for creating and using Python scripts. While many tasks can be accomplished using a range of scripting environments, one environment may be better suited to your project needs.

Integrated development and learning environment (IDLE)
IDLE is typically bundled and downloaded with Python by default, so it is pre-configured for the version of Python that you downloaded. IDLE uses an interactive method to write and run code, meaning the code runs automatically when you press Enter after each line or block of code. IDLE is a good starting place to write and test snippets of code because it is easy to use, and its interactive scripting approach enables you to quickly run code and see the results.

Integrated development environment (IDE)
An IDE is a software application that enables you to write, run, and debug Python code. Popular IDEs include VSCode, PyCharm, and Notepad++. IDEs assist with writing clean code through syntax formatting (which uniformly formats your code) and code completion (which autocompletes code based on what you are typing). IDEs enable you to designate which version of Python to use, and you can even create virtual environments to run different versions of Python at the same time. IDEs provide a seamless scripting, running, and debugging experience: you write and run code within the application, and you see feedback, including any errors, in a results window.

Within ArcGIS Pro
ArcGIS Pro includes two integrated scripting environments: the Python window and ArcGIS Notebooks. While you can use IDLE and IDEs to develop Python scripts for use in ArcGIS, ArcGIS Pro's integrated scripting environments improve scripting, run, and storage workflows for projects that include ArcGIS functionality. The Python window and ArcGIS Notebooks enable you to interact with project items, map content, and geoprocessing tools, and you can add geoprocessing tool outputs directly to your maps. These integrated environments allow you to run Python tasks that access ArcGIS Pro's geoprocessing environment settings. You can also control Python functionality through an integrated Package Manager, which grants access to core Python functionality from standard libraries, Esri's ArcPy and ArcGIS API for Python, and any installed third-party libraries.

Python window
The Python window is split into two sections: the transcript and the Python prompt. Python code is entered in the prompt section and, when executed, moves to the transcript section; as the code runs, any messages or errors are also shown in the transcript. Code is written and run in an interactive manner similar to IDLE: when you press Enter after a line or block of code, the code automatically executes, enabling you to quickly write and test snippets of code.
The Python window is docked at the bottom of the ArcGIS Pro interface. It contains a prompt section, where you write or import code, and a transcript section, where code outputs and executed code are displayed. ArcGIS Notebooks ArcGIS Notebooks is built on top of the Jupyter Notebooks framework. Python scripts are written and run as a series of compartmentalized cells, which means that a workflow can be divided into smaller components rather than be developed as one large script. This structure grants you a larger degree of control when writing, running, and debugging scripts, and it can improve your ability to share workflows with others. You can add text as comments before or after each cell within a notebook, and these comments can function as a guided narrative for anyone running the script. In addition to its structure and its integration with ArcGIS Pro, ArcGIS Notebooks also provides other benefits over other scripting environments: You can develop more complex Python scripts because code is not written and run in an interactive manner. You can generate dynamic output, such as graphics, to assist with verifying and analyzing output. You can write cleaner code with the assistance of syntax formatting and code completion. You can save Python code to the project as a notebook or export it as a.py or.html file. ArcGIS Notebooks is docked to the view pane of the ArcGIS Pro interface. A notebook is configured as a series of cells, which enables you to write and run compartmentalized components of a script. Task to accomplish IDLE IDE Python window in ArcGIS Notebooks ArcGIS Pro Write basic code x x x x Run basic code x x x x Work with integrated tools to x x write clean code and check syntax Internal Perform error handling in an x x integrated environment Implement geoprocessing x x workflows and visually verify results Generate dynamic output x while running scripts Which two statements describe an IDE? It is a highly configurable scripting environment. It includes features like code completion and syntax formatting. Python scripts are composed of three different types of components: data types, statements, and functions. Each component type needs to be included within a script for it to be able to complete a given task. The sample Python script that follows contains a workflow that finds the square root for a set of given values. After you review the sample script, read through the descriptions of data types, statements, and functions to understand how these components are structured and used in Python. #Import modules import math #Assign variables to numbers a = -10 b = 0 c = 10 #Find the square root of numbers numbers = [a, b, c] squareRoot = [] for number in numbers: if number >= 0: squareRoot.append(math.sqrt(number)) else: pass print(squareRoot) Data types Data types are the components that contain data for your Python script to act upon. Common data types include literals, variables, lists, tuples, and dictionaries, but other types of data can be used as well. Data can be explicitly defined within a script, such as with a variable that is assigned to a specific number, or it can be referenced from a source outside of your script, such as a list of numbers pulled from a spreadsheet. The sample Python script above contains several data types: Variables include the letters a, b, and c, as well as words like number, numbers, and squareRoot. Literals include the numbers -10, 0, and 10. Lists include [-10, 0, 10] and []. 
Statements
Statements are components that perform an action but do not return a value. Statements can import modules, assign variables to values, create automation loops, and form logical if-then decision-making structures. The sample script contains several statements:
- Importing math module functionality with the import math statement
- Assigning variables to numbers and lists, such as a = -10 and numbers = [a, b, c]
- Creating a for-in loop with for number in numbers:
- Using a logical structure, such as if number >= 0:

Functions
Functions are components that both perform an action and return a value. Python includes a variety of built-in functions, such as pow(), str(), and print(), which return the power of a number, a text version of a value, and a printout of results. Python can also use functions from imported modules, such as the square root function from the math module (math.sqrt()), as well as user-defined functions that perform custom-made operations. The sample script contains a few functions:
- Returning the square root of a number with math.sqrt(number)
- Printing the squareRoot list output with print(squareRoot)

Literals
Literals are the actual values, such as numbers and text strings, that your Python script uses when performing tasks. Two common types of literals are numerical literals (number values) and string literals (text). Python uses quotation marks to differentiate between them; for example, the string literal Italy is represented in Python as "Italy". Numbers can be used as either numerical or string literals. Consider an ID value of 1: while 1 has a numerical value, its function as an identifier means that it does not need number functionality, such as addition or multiplication. To represent a number as a string literal, enclose it in quotation marks: "1".

A string is a collection of one or more characters. Each character in a string has an index based on its numerical position within the string. Indexes start at 0 at the first position and increase from left to right, and they can be used to return a character from a string, among other uses. Here, an index returns the character at position 2 (the third character, counting from 0) of the Italy string:

    >>> Country = "Italy"
    >>> Country[2]
    'a'

A variable is a name used to represent a literal, similar to shorthand. Variables store the information of the literals that they represent, which means you can use the variable within the Python code of your script. Here, Country is the variable for Italy; if a Python script requests the value of Country, Python returns Italy:

    >>> Country = "Italy"
    >>> Country
    'Italy'

Variables
Variables enable you to write clean and condensed code, and they also improve workflows for maintaining and updating code. When you need to edit a script to use a new value, you only need to replace the value assigned to the variable instead of replacing all instances of the original value. In the following example, reassigning the Country variable to a value different from Italy automatically changes the concatenation that uses it (note the trailing space inside the first string, which keeps the words from running together):
>>> Country = "France" >>> "The country is named" + Country 'The country is named France' List A list is a compound data type that is used to store a sequence of values. Lists are written as comma-separated values enclosed between two square brackets. In the following graphic, the AttributeList variable represents a list of literals [1, "Italy", 12000, 40000000]. Because literals and their variables are interchangeable, AttributeList can also be written as a list of variables [ID, Country, Pop, Area]. Similar to a string, a list is a collection of values that are indexed based off their numerical position within the list. You can write code to execute a task on all the values in a list, or you can use the index to return a single value or a range of values based on their position in the list. In the example, a list index is used to return the fourth item in the AttributeList variable: >>> AttributeList = [1, "Italy", 12000, 40000000] >>> AttributeList 40000000 Tuple A tuple is a compound data type that stores a sequence of values that is similar to a list. Tuples are also written as comma-separated values but can be distinguished from a list because the values are enclosed between parentheses instead of square brackets. Besides its appearance, the difference between a tuple and a list is that a tuple is immutable— meaning that you cannot change the values or the sequence. As such, a tuple is useful to ensure the integrity of a sequence of values throughout a Python script. Similar to strings and lists, a tuple is indexed and can have its values similarly referenced based on this index. Because the index starts at 0, a reference to index 3 would represent the fourth value in the tuple (40000000). Internal >>> AttributeTuple = (1, "Italy", 12000, 40000000) >>> AttributeTuple 40000000 Dictionary A dictionary is a compound data type that stores pairs of items as a key and an associated value. A dictionary key is enclosed in quotation marks, and the key is separated from its value with a colon. Dictionaries are written as comma-separated item pairs that are enclosed in curly brackets. While dictionaries are not indexed like lists and tuples, you can return a value from a dictionary by looking up its key. Think of a dictionary like an address book: you can look up someone's name (key) and get their associated information (value). In the following example, when you look up Area (key), you get 40000000 (its value). >>> AttributeDic= {"ID": 1, "Country": "Italy", "Pop": 12000, "Area": 40000000} >>> AttributeDic["Area"] 40000000 Automation Automation in Python enables you to perform actions multiple times without the need to write repetitive code. Instead of writing multiple lines of code to perform an action multiple times, you only need to write a small amount of Python code to cover all of the repetitive actions. Looping statements allow you to automate the same task for several items at the same time. For example, the following Python script uses a for-in looping statement to replace feet with meters for each of the area items in the AreaList variable. AreaList = ["40000000 feet", "23908 feet", "1200000000 feet"] for area in AreaList: area.replace("feet", "meters") The for-in looping statement, also referred to as the for loop, begins with the for statement and then a user-created variable (area) that represents each item in the list. The for loop automatically assigns a value to this variable, so it does not need to be assigned beforehand. 
Conditions
The if-else statement allows you to perform tasks based on a specific logical condition. For example, the following Python script capitalizes and centers the Country variable only if its value is equal to italy. (String methods return new strings rather than modifying the variable in place, so the results are reassigned to Country; note also that the center method requires a width argument, here 11.)

Country = "italy"
if Country == "italy":
    Country = Country.capitalize()
    Country = Country.center(11)
else:
    print("This country will not be capitalized.")

The if-else statement begins with the if statement, followed by the condition. The colon indicates that the next indented line of code will execute on the variable only when it meets the condition. The example shows an equality condition, which means that the Country variable must be equal (==) to the value italy. If the variable is not equal to italy, the indented actions under the if statement are skipped, and the actions under the else statement are executed instead.

To include several conditions, use elif statements, and then end with the else statement.

Country = "france"
if Country == "italy":
    print("This country is Italy.")
elif Country == "france":
    print("This country is France.")
else:
    print("This country cannot be verified.")

Other statements
The following table lists commonly used statements, including those mentioned previously along with a few other examples.

Statement     Purpose
import        Loads a module to memory
del           Removes an object from memory
pass          Used as a placeholder or to indicate that no action is taken
for-in        Automates tasks in a script
if-else       Adds conditions to a script
try-except    Handles script errors

Functions and statements both perform actions. However, while statements tend to control the flow of a script or extend script functionality, only functions can return a value. If you need to return a value in your Python script, you will likely need to use a function in your code.

Function syntax
Function syntax begins with the name of the function, followed by its parameters enclosed in parentheses. In the following example, pow is the function; the number to raise (the Area value) and the power (2) are its parameters. The returned value is 40,000,000 raised to the second power (1600000000000000).

>>> Area = 40000000
>>> pow(Area, 2)
1600000000000000

(In legacy Python 2, the printed value ended with an L to indicate a long integer; Python 3 has a single integer type and does not use this suffix.)

Built-in functions
Python provides many built-in functions that are ready to be used when you start writing a script. A few commonly used built-in functions are shown here. For a complete list, see the Python documentation.

len() returns the length of an object:
>>> attributeList = [ID, Country, Pop, Area]
>>> len(attributeList)
4

open(file, mode) opens a file in read ("r"), write ("w"), or append ("a") mode. (A raw string, prefixed with r, is used so that the backslashes in the Windows path are not treated as escape characters.)
>>> open(r"C:\Users\Desktop\attrInfo.txt", "r")

max() returns the largest value in a tuple or list:
>>> areaTuple = (40000000, 23908, 1200000000)
>>> max(areaTuple)
1200000000
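As a brief extension of the open() example shown above (a sketch, not from the original notes; the file path is the same hypothetical one), the returned file object can be read line by line and should be closed when you are finished:

f = open(r"C:\Users\Desktop\attrInfo.txt", "r")   # hypothetical path
for line in f:
    print(line.strip())   # strip() removes the trailing newline
f.close()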
Methods
Functions that are associated with a specific data type or object are referred to as methods. Like other functions, methods perform an action, but the action depends on the data type that the method is associated with. For example, the string method capitalize will capitalize the string value that it is attached to.

String method: capitalize
>>> Country = 'italy'
>>> Country.capitalize()
'Italy'

Notice that the method capitalizes the string value (italy), not the variable name (Country). A string method like capitalize does not work on other data types, such as a list. A list is not a string; it is a sequence. A list has different methods that complete tasks for sequences. For example, you can count the number of times that a string or a number occurs within a list.

List method: count
>>> CountryList = ["Italy", "Germany", "China", "Germany"]
>>> CountryList.count("Germany")
2

A method's syntax differs from that of other functions because it begins with the method's associated data type or object. For example, the count method begins with the intended list, CountryList. The object is followed by a period, the method name, and then the method's parameters within parentheses.

Importing functions
More functions become available by importing modules. Think of a module as a container of functions that share the same theme. For example, the math module has functions that complete math-related actions. Modules are not loaded into Python at startup, but an import statement loads a module to memory and makes its functions available in Python. The math module provides more functions that can be used in a script. You can use the built-in help function to find a list of available modules.

>>> help("modules")

ArcGIS Pro supports core Python functionality from Python's standard library, as well as a range of extended capabilities that are available from third-party libraries and site packages. Esri has developed two specialized language options to support GIS workflows: a library named ArcGIS API for Python and a site package named ArcPy.

ArcGIS API for Python
ArcGIS API for Python is a Python library that provides a representation of your GIS as a collection of objects and relationships that Python can interpret. This representation can accommodate ArcGIS Online and ArcGIS Enterprise systems, and it enables you to interact with various elements of the ArcGIS information model. ArcGIS API for Python supplies tools to perform GIS administration tasks, as well as to manage, analyze, and visualize spatial data.

ArcPy
ArcPy is a Python site package that enables you to use core ArcGIS functionality in Python. ArcPy provides access to all available geoprocessing tools, including extensions, and also includes a collection of functions and classes for working with GIS data. You can use ArcPy to analyze geographic data, convert data to different formats, manage spatial and nonspatial data, automate map production, and otherwise enhance GIS workflows.
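As a minimal illustration (a sketch, not from the original notes; it assumes the ArcGIS Pro Python environment with ArcPy installed, and the workspace path and layer names are hypothetical):

import arcpy

# Hypothetical geodatabase; replace with your own workspace
arcpy.env.workspace = r"C:\Data\Project.gdb"

# List the feature classes in the workspace and report each
# one's geometry type
for fc in arcpy.ListFeatureClasses():
    desc = arcpy.Describe(fc)
    print(fc, desc.shapeType)

# Run a core geoprocessing tool (Buffer) on a hypothetical layer
arcpy.analysis.Buffer("Roads", "Roads_Buffer", "100 Meters")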