Summary

These notes provide an introduction to Business Process Management (BPM), including the components of a business process, types of business processes, and process identification. The material also covers the history of BPM and different approaches to engaging in BPM.

Full Transcript


BPM – Business Process Management

Chapter 1 – Introduction to BPM

Business Process Management consists of overseeing how work is performed in an organization to ensure consistent outcomes and to take advantage of improvement opportunities. The term “improvement” may take different meanings depending on the objectives of the organization.

A Business Process consists of the following parts:
Events – Things that occur automatically (no intervention; no duration)
Activities – Do not occur automatically (they require intervention), consist of many steps and take time
Task – A very simple activity that can be seen as one single unit of work
Decision Points – Points where a decision is taken that affects the execution and the outcome of the process
Actors – Entities that range from individuals and organizations to information systems and physical objects
Outcomes – May be multiple. Characterized as either positive (ideally) or negative (avoidable), depending on whether the process added value to the actor(s) involved or not

Take Note: Among the actors involved in a business process, the one who consumes the final output of the process is of special importance and is known as the customer. Customers may be internal or external to an organization. A business process may have several customers.

Business Process – a complete end-to-end set of activities that creates value for the customer. It is also a collection of interrelated events, activities and decision points that involve actors and objects, and that collectively lead to an outcome that is of value to at least one customer. Therefore, business processes are the basic unit of business value in a company/organization. 
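The components listed above can be sketched as a minimal data model. This is purely illustrative: the class and field names are assumptions made for the sketch, not standard BPM terminology.

```python
from dataclasses import dataclass, field

# Illustrative data model of the business process components listed above.
# All names here are assumptions for the sketch, not standard BPMN terms.

@dataclass
class Event:
    name: str  # events occur automatically: no intervention, no duration

@dataclass
class Activity:
    name: str
    steps: list = field(default_factory=list)  # activities consist of many steps

@dataclass
class Outcome:
    description: str
    positive: bool  # positive if the process added value to the actors involved

@dataclass
class BusinessProcess:
    events: list
    activities: list
    actors: list     # individuals, organizations, systems, physical objects
    outcomes: list

    def customers(self, consumers):
        # the actors who consume the final output are the customers;
        # a business process may have several customers
        return [a for a in self.actors if a in consumers]

p = BusinessProcess(
    events=[Event("order received")],
    activities=[Activity("ship order", steps=["pick", "pack", "ship"])],
    actors=["warehouse", "client A", "client B"],
    outcomes=[Outcome("order delivered", positive=True)],
)
print(p.customers({"client A", "client B"}))  # ['client A', 'client B']
```

Note how the model allows several customers for one process, matching the remark above.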
Types of Business Processes

A typical organization has:
- many departments
- lots of hierarchical levels
- different roles and jobs
What is the purpose? To create value for a customer.

Origins and History of BPM

In order to better understand the popularity and usefulness of BPM, it is worth looking at its main drivers from a historical perspective. Its key drivers are the advent of the functional organization, process thinking, and business process re-engineering (BPR).

The functional organization
A functional organization has a hierarchical structure wherein people work by their area of specialization. They are supervised by a functional manager with expertise in the same field, who, nevertheless, is not necessarily an expert in carrying out the processes they oversee.

Why BPM?
Organizations nowadays rely on many departments, lots of hierarchical levels and different roles and jobs. What is the purpose? To create value for a customer. In a functional organization, organizational units are more important than processes and projects, and it is the organizational units’ performance that is optimized. A significant part of BPM is process modelling, which is used in every phase of the business processes’ lifecycle.

How to engage in Business Process Management?

Continuous Process Improvement (CPI)
- Does not put into question the current process structure
- Seeks to identify issues and resolve them incrementally, one step at a time and one fix at a time

Business Process Re-Engineering (BPR)
- Puts into question the fundamental assumptions and principles of the existing process structure
- Aims to achieve breakthroughs, for example by removing costly tasks that do not directly add value
- Fails due to: 1. Concept misuse; 2. Over-radicalism; 3. Support immaturity.

BPM, contrarily to BPR, focuses on the entire lifecycle of business processes:
1. Process Identification
2. Process Discovery
3. Process Analysis
4. Process Redesign
5. Process Implementation
6. Process Monitoring and Controlling

Stakeholders in the BPM Lifecycle

Chapter 2 – Process Identification

Process Identification (PI) is intended to identify the firm’s business processes and establish clear criteria and rules for prioritizing them. Modelling every existing process is hard and performing BPM is not free, so the initiatives need to focus on key processes. Output: a process architecture that represents the organization’s business processes and how they relate to each other.

PI is separated into two different stages:
1. Designation
   a. Enumerate main processes
   b. Determine process scope
2. Evaluation
   a. Prioritize processes based on:
      i. Importance
      ii. Health
      iii. Feasibility

The Designation Phase aims to enumerate every existing process within an organization. One of the main difficulties in this phase is that there are many perspectives on how to categorize business processes within an organization. It answers the question: “What processes are executed in the organization?”.

Some perspectives:
- Geary Rummler’s classification of processes – Sell; Deliver; Making sure you have things to sell and deliver.
- Porter’s Value Chain Model – Core Processes, Support Processes, Management Processes.
  ○ Management processes provide direction, rules and practices
  ○ Core processes generate value as they are directly linked to external customers
  ○ Support processes provide resources to be used by other processes

Process Identification
There are several languages for process modelling. Some of the most popular are Event-Driven Process Chains (EPCs); Data-flow diagrams; IDEF3; and the Business Process Model and Notation (BPMN), including BPMN 2.0. 
Process Analysis
Process analysis – analysis of the as-is process and assessment of its performance, potential issues and room for improvement. A process analyst needs to:
- define the suitable performance measures mentioned earlier
- quantify them in the as-is process
- take the appropriate actions to solve possible issues
1. Qualitative analysis: value-added analysis; root-cause analysis; PICK charts; issue register
2. Quantitative analysis: quantitative flow analysis; queuing analysis; process simulation

Process Implementation

Process Monitoring and Controlling

How many processes are advisable? It always depends on the organization’s specific objectives for BPM. For most organizations, it varies from a dozen to a couple of dozen (12 to 24, ideally). Besides identifying how many processes occur, it is also important to be aware of the relationships across the multiple processes. It is, of course, a subjective evaluation, but it is important to understand how the performance of one process is related to another’s.

Impact vs. Manageability
The number of processes defined is a very important issue in the designation phase. To balance the advantages and disadvantages of a large process scope, Davenport defined:
- Broad Processes – Identified in areas where an organization feels it is important to completely renovate the existing operations at some point.
- Narrow Processes – Are not targeted for major renovations. They need to be actively monitored and are subject to continuous updating.
It is imperative to map how narrow processes relate to broader processes in order to avoid confusion. The designation phase is a constant task and identification is continuous.
Approaches:
1. Specialization: general – special product/service
2. Horizontal: upstream – downstream processes
3. Vertical: main processes – sub-processes

Value Chain Modelling Approach to Identify Processes (horizontal example)
The value chain is the chain of processes an organization performs to deliver value to customers and stakeholders. That is, it is a mechanism to group high-level business processes according to an order relation.

The Evaluation Phase
The Evaluation Phase has the objective of evaluating which processes should be the focus of BPM initiatives. Note that not all are equally relevant or can receive the same level of attention. Some subjectivity is always involved in these methods. The most popular methods to evaluate the processes’ criticality consider the processes’:
- Importance – the importance that each process has for the organization’s strategic goals
- Dysfunction – each process’ health, that is, which processes need to be managed first?
- Feasibility – how susceptible is each process to being successfully managed?
How many processes should then be subjected to BPM? There is no straight answer to this question.

Process Architecture
A process architecture is a map that comprises the processes within an organization and their relationships with each other.
1. Process Landscape – depicts the main processes in a very simple way
2. Abstract Process Models – processes of the 1st level point to more developed business process models
3. Detailed Process Models – processes of the 2nd level point to the final, detailed version of the processes that, together, comprise the actual processes mapped, with every element detailed.
The main difficulties in designing a PA are the subjectivity of composing the most relevant processes and the fact that the landscape (1st level of the architecture) needs to be easily understandable. 
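The evaluation criteria above (importance, dysfunction, feasibility) can be sketched as a toy prioritization scheme. The 1–5 scale, the equal weighting and the process names are all assumptions made for the example; in practice the assessment remains subjective.

```python
# Toy sketch of the evaluation phase: score each process on importance,
# dysfunction (health) and feasibility, then rank. The 1-5 scale and the
# equal weighting are assumptions, not a prescribed method.

processes = {
    "order-to-cash":       {"importance": 5, "dysfunction": 4, "feasibility": 3},
    "procure-to-pay":      {"importance": 4, "dysfunction": 2, "feasibility": 4},
    "issue-to-resolution": {"importance": 3, "dysfunction": 5, "feasibility": 2},
}

def priority(scores):
    # higher importance, worse health and higher feasibility -> higher priority
    return scores["importance"] + scores["dysfunction"] + scores["feasibility"]

ranked = sorted(processes, key=lambda p: priority(processes[p]), reverse=True)
print(ranked)  # 'order-to-cash' comes first with a total of 12
```

Only the top-ranked processes would then be selected for the BPM initiative, consistent with the note that there is no fixed answer on how many.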
Designing a Process Architecture (Dijkman Approach)
This approach splits the 1st level of the process architecture into two distinct dimensions:
- Case Type Dimension – classifies the types of cases handled by an organization
- Business Function Dimension – classifies the functions of an organization, i.e., what the organization does

Case Types
A case is something that an organization/unit/department does. Normally, a case is a product or service (P/S) that is delivered by the firm to its clients, such as a bank account (service) or an electronic device (product). Note that cases can characterize P/S that are “delivered” to the organization’s internal or external customers. Hence, case types can be purposely categorized using many different properties.

Business Functions
A business function categorizes the functions of an organization. A function is something that an organization does. Usually, a hierarchical decomposition of functions can be made: a function consists of sub-functions, which, in turn, consist of sub-sub-functions, etc.

Designing a Process Architecture
To arrive at a business process architecture like the one provided in the previous slide, an approach consisting of four steps is proposed:
1. Identify the organization’s case types
2. Identify the organization’s functions for each case type stated earlier
3. Construct one or more case/function matrices
4. Identify the processes by crossing each case type with the business functions

Business Functions for Case Types
The second step of building the process architecture consists of classifying the business functions which the organization conducts for each previously defined case type. Hence, each defined case type must be roughly analyzed so that no business function conducted for it is forgotten. In many cases, one can use a pre-existing reference model for coming up with the business functions (e.g., the APQC’s Process Classification Framework – see next slide). 
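Steps 1–3 of the four-step approach above can be sketched as follows. The case types and business functions are invented examples, not taken from any reference model.

```python
# Sketch of steps 1-3 of the Dijkman approach: case types as columns,
# business functions as rows, and an 'X' where a function is performed
# for a case type. The example case types and functions are invented.

case_types = ["mortgage", "insurance"]                    # step 1 (assumed)
functions = ["intake", "risk assessment", "collections"]  # step 2 (assumed)

# step 3: the case/function matrix as a set of (function, case type) pairs
matrix = {
    ("intake", "mortgage"), ("intake", "insurance"),
    ("risk assessment", "mortgage"), ("risk assessment", "insurance"),
    ("collections", "mortgage"),
}

def cell(function, case_type):
    # 'X' if the corresponding function can be performed for the case type
    return "X" if (function, case_type) in matrix else " "

for f in functions:
    print(f"{f:16}", [cell(f, c) for c in case_types])
```

Step 4 (identifying processes from the filled matrix) is then guided by the splitting guidelines discussed in the following slides.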
These reference models can only serve as a starting point to develop an initial classification of business functions, which must then be adapted to each organization in particular. Finally, a decision needs to be made as to how far the functional decomposition should go. Although one can go down to the level of the tasks each employee performs, a shallower decomposition is advised at this stage. Two rules of thumb may be used:
1. Decomposition should at least be performed down to a level at which functions correspond to different organizational units (with corresponding managers).
2. Decomposition should include different functions for the different roles in each department.

Case/Function Matrices
The previous two steps of the described approach lead to a matrix that has the different case types as columns and the different functions as rows. A cell in the matrix contains an ‘X’ if the corresponding function can be performed for the corresponding case type.

Identify Processes – Guideline 1
If a process has different flow objects, it can be split up vertically. A flow object is an object in the organization that flows through a business process. It is the object on which business process activities are carried out. Typically, each business process has a single flow object, such that flow objects can be used to identify business processes. Consequently, if multiple flow objects can be identified in a business process, this is a strong indication that the process should be split up.

Identify Processes – Guideline 2
If the flow object of a process changes multiplicity, the process can be split up vertically. This is due to the fact that in a business process a single flow object is sometimes used, while at other times multiple flow objects of the same type are used. This is typical for batch processing, in which certain activities are performed for multiple customer cases in a batch at the same time. 
If, in the same process, the number of flow objects that is processed per activity differs, this may be a reason for splitting up the process.

Identify Processes – Guideline 3
If a process changes transactional state, it can be split up vertically. According to action-workflow theory, a business process goes through a number of transactional states (initiation, negotiation, execution and acceptance):
- In the initiation state, contact between a customer and a provider is initiated
- In the negotiation state, the customer and the provider negotiate about the terms of service or delivery of a product
- During the execution state, the provider delivers the product or service to the customer
- During the acceptance state, the customer and the provider negotiate about the acceptance and payment of the delivery
A transition in a process from one state to another is an indication that the process can be split up.

Identify Processes – Guideline 4
If a process contains a logical separation in time, it can be split up vertically. A process contains a logical separation in time if its parts are performed at different time intervals. Intervals that can typically be distinguished include: once per customer request, once per day, once per month and once per year.

Identify Processes – Guideline 5
If a process contains a logical separation in space, it can be split up horizontally. A process contains a logical separation in space if it is performed at multiple locations and is performed differently at those locations. In other words, besides the spatial distance, the separation must be such that there is no choice but to perform the processes differently for the different logical units.

Identify Processes – Guidelines 6 and 7
Guideline 6: If a process contains a logical separation in another relevant dimension, it can be split up horizontally. As with the separation in space, it is not sufficient for processes to just be separated. 
The separation must be such that there is no choice but to perform the processes differently for the different logical units.
Guideline 7: If a process is split up in a reference model, it can be split up. A reference process architecture is an existing process architecture that is pre-defined as a best-practice solution. It structures a collection of processes. For example, if a reference financial services process architecture exists, its structure can be used as an example or starting point to structure your own process architecture.

Identify Processes – Guideline 8
If a process covers (many) more functions in one case type than in another, it can be split up horizontally. The application of this last rule depends upon the current decomposition of processes. If applied, it is necessary to look at the current decomposition of processes and check whether, within a process, (many) more functions are performed for one case type than for another, i.e. whether a process has many more crosses in one column than in another. If so, this is a strong indication that the process should be split up for these two case types. 
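Guideline 8 lends itself to a small sketch: count the crosses per column of the case/function matrix and flag a large imbalance. The example matrix and the "twice as many" threshold are assumptions made for illustration.

```python
# Sketch of Guideline 8: within one process, compare how many functions
# (crosses) each case type has; a large imbalance suggests a horizontal
# split. The matrix content and the factor-of-two threshold are assumptions.

matrix = {
    ("intake", "mortgage"), ("intake", "insurance"),
    ("risk assessment", "mortgage"),
    ("collections", "mortgage"),
    ("payout", "mortgage"),
}

def crosses_per_case_type(matrix):
    # number of crosses in each column of the case/function matrix
    counts = {}
    for _function, case_type in matrix:
        counts[case_type] = counts.get(case_type, 0) + 1
    return counts

def suggest_split(matrix, factor=2):
    counts = crosses_per_case_type(matrix)
    return max(counts.values()) >= factor * min(counts.values())

print(crosses_per_case_type(matrix))  # mortgage: 4 crosses, insurance: 1
print(suggest_split(matrix))          # True: 4 >= 2 * 1
```

Here the mortgage column has four crosses against one for insurance, so the guideline would suggest splitting the process for these two case types.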
Limitations of Process Identification
- The project’s purpose is not clear
- The scope of the process is too narrow, so that the identified root causes are located outside the scope
- The scope of the process is too wide, leading to a process improvement project that must compromise on its level of detail
- The process is identified in isolation from other projects, leading to redundancies and inconsistencies between these projects
- Involved project members and stakeholders are not sufficiently informed
- The involved project members and stakeholders have not been correctly selected
- Importance, dysfunction and feasibility are orthogonal dimensions, and not every process that ranks high on one of them should be subjected to BPM initiatives

Chapter 3 – Essential Process Modeling

Business Process Models are important at various stages of the BPM lifecycle. They are very useful to understand the process, to share our understanding of the process with the people who are involved in it, and to identify and prevent issues. In a business process, there are logical relationships between events and activities; the most basic relationship is the sequence, composed of the three simplest BPMN symbols: events, activities, and arcs. Process models should always be completed with an end event, even if it is obvious.

Once a process instance has been spawned, a token is used to identify the progress (state) of that instance. Tokens are created in a start event and flow throughout the process model until they are destroyed in an end event. They are the colored dots drawn on top of the process model.

First Steps with BPMN
Events, activities and process models should always be labelled, and some conventions should be followed. A model is characterized by three properties: mapping, abstraction, and fit for purpose.

Branching and Merging
Activities and events do not need to be performed one at a time. 
When/If two or more activities are alternative to each other, they are mutually exclusive. When/If they can be performed in parallel, they are concurrent. To model the relation between two or more alternative activities, a gateway must be used.
Gateway – a mechanism that allows or disallows the passage of tokens and is, in this way, associated with a decision activity. Depending on the decision’s criteria, tokens can be split or joined:
- Split Gateway – represents a point where the process flow diverges (one incoming sequence flow, multiple outgoing sequence flows)
- Join Gateway – represents a point where the process flow converges (multiple incoming sequence flows, one outgoing sequence flow)
NOTE: Different types of decisions also lead to different types of gateways.
NOTE: You must close a gateway with one of its respective type.
1. Exclusive Decisions – mutually exclusive conditions must always be employed (following XOR-split gateways and preceding XOR-join gateways).
2. Parallel Execution – for parallel/concurrent branches (following AND-split gateways and preceding AND-join gateways). The token splits when it reaches the AND-split and takes all outgoing arcs. The divided tokens collectively represent the state of that instance. The AND-join waits for a token to arrive from each incoming arc, then merges them back into one so the instance can proceed (synchronization).

Explicit vs Implicit decisions: there are two situations in which a gateway can be omitted.

Inclusive Decisions
To model situations where a decision may lead to one or more options being taken at the same time (following OR-split gateways and preceding OR-join gateways).
NOTE: Since the OR-join semantics is to some extent complex, its presence may cause confusion. Thus, it should be used only when strictly required. 
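The AND-split/AND-join token semantics described above can be illustrated with a toy simulation. This is not a BPMN engine; the function names and arc labels are invented for the sketch.

```python
# Toy token simulation of an AND-split followed by an AND-join, mirroring
# the token semantics described above: the split copies the token onto all
# outgoing arcs, and the join waits until a token has arrived from every
# incoming arc before emitting a single token (synchronization).

def and_split(token, outgoing_arcs):
    # one incoming token becomes one token per outgoing arc
    return {arc: token for arc in outgoing_arcs}

def and_join(arrived, incoming_arcs):
    # fire only when every incoming arc holds a token; otherwise keep waiting
    if all(arc in arrived for arc in incoming_arcs):
        return "merged-token"
    return None

tokens = and_split("t1", ["check stock", "check credit"])
print(and_join(tokens, ["check stock", "check credit"]))      # 'merged-token'
print(and_join({"check stock": "t1"}, ["check stock", "check credit"]))  # None
```

The second call returns None because only one branch has delivered its token, so the join keeps waiting, which is exactly the synchronization behaviour described above.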
Rework and Repetition
Sometimes we may need to repeat some activities. To model rework or repetition, we first need to identify the activities (or the fragment of the process) that can be repeated – a repetition block.

Other BPMN 2.0 symbols

Information Artifacts
These focus on the data perspective of business processes, indicating which information artifacts (documents, files, etc.) are needed to perform each activity and which ones are generated by which activity.
1. Data Objects – represent information flowing in and out of activities. They can be physical artifacts, such as a paper invoice or a letter, or electronic artifacts, such as an e-mail or a computer file. Data objects may have different states throughout a business process (although including these states is optional).
2. Data Stores – a data store is a collection of data objects that needs to “live” beyond the duration of a process instance. Activities can read and write data objects from and to data stores. They are connected to activities via data associations.
3. Text Annotations – used to provide additional information to the model reader in order to improve the readability of the model. They do not require any semantics since they do not affect the flow of tokens through the model.

Resources
Also known as the organizational perspective, it indicates who or what performs which activity. “Resource” is therefore a generic concept for anyone or anything involved in the performance of a process: a process participant (e.g. an employee), a software system, or a piece of equipment. Resources can be active (autonomously perform an activity – usually the important ones) or passive (merely involved in the performance of an activity). As it would be impractical to specify each individual resource of the organization involved in a process, one usually refers to a set of resources (like an organizational unit, department, or role).

BPMN provides two constructs to model resource aspects:
1. Pools – used to model resource classes, usually a business party such as a whole organization.
2. Lanes – used to partition a pool into sub-classes or single resources, like a department, unit, team or software system/equipment within the organization. Lanes can be nested within each other in multiple levels.
There are no restrictions as to what precise resource type a pool or a lane should model; a pool is simply often used to model a business party like a whole organization, and a lane to model a department, unit, team or software system/equipment within that organization.

Placement rules: XOR-splits need to be placed in the same lane as the preceding activity; AND-splits and all join gateways can be placed in any lane. Sequence flows cannot cross pool boundaries, so we use message flows instead.
- Public view of a business party/pool – black box
- Private view of a business party/pool – white box

Message Flow
A message flow represents the stream of information between two separate resource classes (i.e., pools). It is depicted as a dashed line which starts with an empty circle and ends with an empty arrowhead, and bears a label indicating the content of the message, e.g. a fax or a purchase order, but also a letter or a phone call.

Process Decomposition
Complex business processes often lead to a process model too large to be apprehended in its entirety. To make the model clearer, we can simplify the process by hiding certain parts within a sub-process. A sub-process is an activity composed of a set of work units which can be autonomously managed. How to use a sub-process?
1. Identify groups of related activities, i.e. those activities which together achieve a particular goal or generate a particular outcome in the process model under analysis.
2. Simplify the model by hiding the content of its sub-processes. This is done by replacing the macro-activity representing the sub-process with a standard-size activity. 
We know that an activity hides a sub-process when it is represented with a small square with a plus sign “+” inside. This operation is called collapsing a sub-process. When a sub-process is collapsed, the total number of visible activities is reduced. Therefore, in BPMN, a collapsed sub-process is one which hides its internal steps, as opposed to an expanded sub-process, which shows them.

All in all, we should use sub-processes when a model has a size that adds a considerable degree of complexity to reading it as a whole. Nonetheless, given the subjectivity inherent to each individual’s perception of complexity, a limit of at most 30 flow objects (i.e. activities, events, gateways) has been proposed, above which a process model should be decomposed. By using as few elements as possible per process model and by decomposing it, behavioral issues will be avoided. There are several structural factors that may affect the readability of a process model:
- density of the process model connections
- number of parallel branches
- longest path from a start to an end event
- the way the process model is represented: the label style (e.g. always use a verb-noun style), the color palette, the line thickness, and so on

Value chain based process decomposition
- Level 1: value chain. A simple linear description of the phases of the process; no gateways; each activity in the chain is a sub-process.
- Level 2+: expand each activity in the value chain. Decisions, handoffs (lanes, pools); parallel gateways, different types of events; data objects and data stores; and as much detail as you need, and no more.
NOTE: Global process model – a process model that is not embedded within any other process model, and as such can be invoked by other process models within the same process model collection.

Process Reuse
By default a sub-process is embedded in its parent process model, and thus it can only be invoked from within that process model. 
However, when modeling a business process we often need to reuse parts of other process models of the same company. A global process model is a process model that is not embedded within any other process model, and thus it can be invoked by other process models within the same process model collection. To indicate that the sub-process being invoked is a global process model, the collapsed sub-process activity is represented with a thicker borderline.

In conclusion, some syntactic rules should be underlined when using sub-processes. A sub-process is a regular process model: it should begin with at least one start event and conclude with at least one end event. If there are multiple start events, the sub-process will be triggered by the first event that occurs. If there are multiple end events, the sub-process will return control to its parent process only when each token flowing in this model reaches an end event. Finally, we cannot cross the boundary of a sub-process with a sequence flow. To pass control to a sub-process, or receive control from a sub-process, we should always use start and end events. However, message flows can cross the boundaries of a sub-process to indicate messages that arise from, or are addressed to, internal activities or events of the sub-process.

Advanced Process Rework and Repetition
Expanded sub-processes can be used as an alternative way to model the activities of a process that may be repeated. To identify a repetitive sub-process, a loop symbol is used and, if considered necessary, an annotation can specify the loop condition (e.g. “until the response is approved”). Furthermore, it is not mandatory to detail a loop sub-process; however, if you choose to do so, a decision activity must be added at the end, inside the sub-process. Otherwise, it could never be determined whether the sub-process should repeat. 
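The loop structure described above (a repetition block followed by a decision on the loop condition) can be sketched as follows. The "approved on the third attempt" behaviour is invented for the example.

```python
# Sketch of a loop sub-process: run the repetition block, then evaluate
# the loop condition ("until the response is approved") at a decision
# point placed at the end of the block. The rule that approval happens
# on the third attempt is an assumption made for the example.

def run_repetition_block(attempt):
    # stand-in for the activities inside the loop sub-process
    return attempt >= 3  # assume the response is approved on attempt 3

attempt = 0
approved = False
while not approved:       # decision point at the end of the repetition block
    attempt += 1
    approved = run_repetition_block(attempt)

print(attempt)  # 3
```

Without the decision at the end of the block (the loop condition in the `while`), the sketch would repeat forever, mirroring the remark that the repetition could otherwise never be determined.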
Parallel Repetition
The loop activity models sequential repetition, meaning that the procedure has a stepwise (“step by step”) approach. Occasionally, we may be interested in executing multiple instances of the same activity at the same time. This is useful when the same activity needs to be executed for multiple entities or data items, such as:
- Request quotes from multiple suppliers
- Check the availability for each line item in an order separately
- Send and gather questionnaires from multiple witnesses in the context of an insurance claim
A multi-instance activity is marked by a multi-instance symbol. Similarly, a multi-instance pool represents a set of resource classes, or resources, with similar characteristics.

Uncontrolled Repetition (Ad-Hoc Sub-process)
Refers to the repetition of activities, in no particular order, until a certain condition is reached. The order of these activities is not controlled by the model. A partial order may be established among the activities of an ad-hoc sub-process through sequence flows. Ad-hoc sub-processes cannot have start and end events and are marked with a ‘~’ symbol.

Handling Events
Events are, by definition, of a transitory nature, and this concept is no different within a process model. When a process instance starts, we have a start event – tokens are created; when a process instance completes, we have end events – tokens are destroyed. However, if the event happens during the process – an intermediate event – the token remains trapped in the incoming sequence flow of this event until it occurs. Once it occurs, the token traverses the event instantaneously, since events cannot retain tokens. Intermediate events are depicted as a circle with a double borderline.

Message Events
Message events should be used only when the corresponding activity simply sends or receives a message.
- End message event – indicates that the process completes after sending a message.
- Intermediate message event – indicates the reception (or sending) of a message during the process. 
Intermediate and end message events are not represented in the same way as activities that are used exclusively to send or receive messages. If the intermediate event signals a message being sent, the envelope is darkened.
- Untyped Event – indicates that an instance of the process is created (start) or completed (end), without specifying the cause for creation/completion
- Start Message Event – indicates that an instance of the process is created when a message is received
- End Message Event – indicates that an instance of the process is completed when a message is sent
- Intermediate Message Event – indicates that an event is expected to occur during the process; the event is triggered when a message is received or sent
Message events should be used only when the corresponding activity simply sends or receives a message. Moreover, we can replace a send activity that is succeeded by an untyped end event with an end message event, as the process finishes right after the message is sent. End message events are signaled with the end event symbol containing a darkened envelope inside.
Please note: a start message event does not follow the same logic; it is not the same as an untyped start event followed by a receive activity. With a start message event, a process instance is created upon the receipt of a specific message; with an untyped start event followed by a receive activity, a process instance may be initiated at any moment, and the receive activity is then executed when the message arrives.

Temporal Events
Like message events, timer events can be used as start events – in this case to show that process instances are initiated at specific time intervals (e.g. every Friday morning, every working day of the month, every morning at 7am). A timer event can also be an intermediate event, used to model a time period that must elapse before the process instance can proceed. A timer event is represented by a light clock enclosed in the event symbol. 
When the time period is controlled by a factor external to the process, we are dealing with a timer event that is a catching event. That is to say, the process does not define the time interval; it reacts to it instead.

Racing Events – event-based decision gateways
Frequently, two external events race against each other, and the first one to occur determines the rest of the process. For instance, imagine that an insurance quote has been proposed to a client: the client may respond with an acceptance message, leading to an insurance contract, or with a rejection, discarding the quote. This race between external events is captured by means of the event-based exclusive (XOR) split. In terms of representation, an event-based exclusive split is marked by a gateway with an empty pentagon enclosed in a double-line circle. In BPMN, as opposed to an internal decision taken by the process itself, an event-based decision reacts according to whichever event occurs first.

To prevent behavioral anomalies in the interactions between pools, gateways can be used. When designing message flows, the order of these connections must be assessed to avoid deadlocks. Recall that an internal decision in one party needs to be matched with an event-based decision in the other party. Thus, an activity with an outgoing message flow will send that message when the activity completes (throwing event), whereas an activity with an incoming message flow will depend on that message to start (catching event).

Process Abortion
An end terminate event instantaneously aborts the process instance at the current level, as well as any of its sub-processes, destroying all tokens. It is propagated exclusively downwards in a process hierarchy.

Internal Exceptions
At times, we manage to avoid aborting the overall process by interrupting only the activity responsible for the exception. We may execute a recovery procedure so that the process recovers its normal operation. 
If this attempt fails, then abortion is inevitable. This mechanism is formalized by the error event. The error event interrupts the sub-process and throws the exception, which is subsequently caught by an intermediate catching error event attached to the boundary of the same sub-process. The recovery procedure is then triggered by this boundary event via an outgoing branch named the exception flow.
External Exceptions
An exception does not necessarily come from the process itself; it can result from an external event occurring during an activity. For instance, in a purchase order, when checking the available stock of the required product, the seller may receive an order cancellation from the customer. As a result, the seller must stop the current operation (stock check) to make way for another operation, which is the cancellation of the order. These are called unsolicited exceptions, and they can be captured by attaching a catching intermediate message event to an activity's boundary.
Regarding token semantics, when the intermediate message event is triggered, the token is taken out of the corresponding activity. This interrupts the activity, and the token is routed via the exception flow attached to the boundary event, activating the recovery procedure.
Activity Timeouts
Another exception may be an activity with an excessive processing time, which is, in turn, interrupted. To model this situation we assign a time interval to the activity, during which it has to be completed (e.g.: the payment of an order has to be made within three days, otherwise the "purchase order" process ends). An intermediate timer event may be attached to the activity's boundary, so that the timer is activated when the corresponding activity begins. If the timer fires before the activity is complete, it triggers the activity's interruption (i.e. a timer event works as a timeout when attached to an activity's boundary).
Non-interrupting Events
Nonetheless, there are exceptions created by external events that do not justify the disruption of process activities. This is the case of a customer who updates his personal data while the task "stock availability check" is running. In principle, these two actions will never interfere with each other. To signal that a boundary event is non-interrupting, we represent it with a dashed double border.
Process Discovery
Process Discovery is the act of gathering information about an existing process and organizing it in terms of an as-is process model. Gathering the required information is a time-consuming activity. To structure this work, we can describe four stages of process discovery:
1. Defining the setting – assemble a team for working on the process;
2. Information gathering – build an understanding of the process;
3. Modeling – organize the creation of the process model;
4. Assuring model quality – assure the resulting model meets the relevant quality criteria.
To do so, two critical competencies are required: understanding how the process actually takes place, and the technical expertise to model it accurately. As is fairly easy to understand, these two competencies are rarely possessed by the same person. Below, the most common challenges in conducting the process discovery phase are introduced and discussed.
The Setting of Process Discovery
In a typical real-world case, there is at least one process analyst and, very often, several domain experts. Analysts and experts have complementary roles, which gives rise to three challenges in process discovery.
Profile of an Analyst – a good analyst gets the right people on board, works with hypotheses, identifies patterns, and pays attention to model aesthetics.
Process Discovery Methods
1. Evidence-based Discovery – various pieces of evidence are typically available for studying how an existing process works.
2.
Interview-based Discovery – interviewing domain experts about how processes are performed. Interviews must be conducted with the various domain experts involved in the process. Two strategies are available for scheduling interviews: starting backwards from the products and outcomes of the process, or starting at the beginning and proceeding forward.
3. Workshop-based Discovery – offers the opportunity to get a rich set of information on the business process. Several domain experts participate, along with the process owner and the process analyst.
Organizational Culture and its Impact on Discovery Methods – Some firms practice a culture of openness in which every employee is encouraged to express their ideas and critiques. Even in more rigid organizations, it is advisable to pay special attention to equality among participants, so that ideas and critiques are not constrained.
Strengths and Limitations of Discovery Methods
Objectivity – evidence-based discovery methods typically provide the best level of objectivity (efficient insight into processes)
Richness – while interview-based and workshop-based discovery methods show some limitations in terms of objectivity, they are typically strong in providing rich insights into the process (quality insight)
Time Consumption – discovery methods differ in the amount of time they require (time-expensive insight)
Immediacy of Feedback – methods that directly build on conversation and interaction with domain experts are best for getting immediate feedback (quick results)
Process Modeling Method – 5 stages:
1. Identify Process Boundaries – it is important to identify the events that trigger processes and their respective outcomes. (Events)
2. Identify Activities and Events – the goal of this second stage is to identify the process' main activities. Domain experts can clearly state what they are doing, even if they are not aware of it being part of an overarching business process. (What)
3.
Identify Resources and their Handovers – focusing on who is responsible for which activity. This lays the foundations for defining pools and lanes, and the activities and events within them. (Who)
4. Identify the Control Flow – answers the questions of when and why activities and events are performed. We need to identify order dependencies, decision points and concurrent executions, as well as potential rework and repetition. (When and Why)
5. Identify Additional Elements – identify the artifacts (adding data objects and data stores, and their relations to activities and events via data associations) and exception handlers (using boundary events, exception flows and compensation handlers). (Artifacts and Exceptions)
Process Model Quality Assurance
Assess the quality of the model with, at least, a process analyst and various domain experts. Since a process model is developed in a sequential way (not all at once), several steps of quality assurance are needed:
1. Syntactic Quality and Verification – the content of the model should comply with the syntax defined by the process modeling language in use (e.g.: BPMN 2.0). Verification addresses formal properties of a model that can be checked without knowing the real-world process. Structural correctness relates to the types of elements used in the model and how they are connected, whereas behavioral correctness relates to the potential sequences of execution as defined by the process model.
2. Semantic Quality and Validation – semantic quality relates to the aim of having models that make true statements about the considered domain, either for AS-IS processes or TO-BE processes. The process model has to represent the real process being modelled. Validation checks the semantic quality of a model by comparing it with the real-world business process.
3. Pragmatic Quality and Certification – promotes the development of reader-friendly models, that is, building a process model of good usability.
Modeling Guidelines and Conventions – an important tool for assuring consistency and integrity in larger modeling initiatives. The Seven Process Modeling Guidelines (7PMG):
G1: Use as few elements in the model as possible.
G2: Minimize the routing paths per element.
G3: Use one start and one end event.
G4: Model as structured as possible.
G5: Avoid OR-gateways.
G6: Use verb-object activity labels.
G7: Decompose a model with more than 30 elements.
Chapter 5 – Qualitative Process Analysis
Qualitative Analysis – a subjective analysis that aims to identify and eliminate waste (value-added and waste analysis) and to identify and prioritize problems (issue register and root-cause analysis).
Value-Added Analysis – aims to detect unnecessary steps (a task, or part of a task) in a process in order to eliminate them (minimize BVA and eliminate NVA). VAA proceeds in the following order:
Decompose each task of a process into steps:
○ Steps performed before a task
○ The task itself (possibly decomposed into smaller steps)
○ Steps performed after a task, in preparation for the next task
Classify each step regarding its outcomes:
○ Value Adding (VA) – produces value or satisfaction for the customer (positive outcomes)
○ Business Value Adding (BVA) – the step is required for the business to run efficiently, to collect revenue, or is required due to the regulatory environment of the business
○ Non-Value Adding (NVA) – steps that don't fit either of the previous classifications.
Determine how to minimize BVA and eliminate NVA:
○ Minimizing BVA steps is a procedure that demands some caution; before doing so, one should relate them to business goals and legal requirements;
○ Eliminating NVA steps may be done through automation of processes (e.g.: implementing an IS that allows all stakeholders to know what they need to do in order to move forward in a process).
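The classify-then-quantify procedure of value-added analysis can be sketched in a few lines of Python. All step names, classifications and durations below are invented for illustration; the point is only to show how, once each step is labeled VA, BVA or NVA, the share of time per class makes elimination candidates visible.

```python
# Hypothetical value-added analysis: classify each step (VA / BVA / NVA)
# and compute how much of the total time each class consumes.
steps = [
    ("Check invoice details",        "VA",  10),  # (step, class, minutes)
    ("Log invoice in ERP system",    "BVA",  5),
    ("Forward invoice to manager",   "NVA",  2),
    ("Wait for manager approval",    "NVA", 60),
    ("Approve and schedule payment", "VA",   5),
]

total = sum(minutes for _, _, minutes in steps)
by_class = {}
for _, cls, minutes in steps:
    by_class[cls] = by_class.get(cls, 0) + minutes

for cls in ("VA", "BVA", "NVA"):
    share = 100 * by_class.get(cls, 0) / total
    print(f"{cls}: {by_class.get(cls, 0)} min ({share:.0f}%)")

# NVA steps (here: a handoff and a waiting step) are candidates for
# elimination, e.g. via automation; BVA steps should first be related to
# business goals and legal requirements before being minimized.
```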
Waste Analysis
It is the mirror image of value-added analysis: value-added analysis looks at steps from the positive perspective (what adds value), whereas waste analysis looks at them from the negative one (what to remove). The objective: find "waste" throughout processes. Note: some waste takes place not within steps but between them.
(Move) Transportation – the first and (perhaps) most prevalent source of waste: sending or receiving materials or documents (incl. electronic ones) taken as input or output by the process activities. Lanes and pools are very helpful for spotting transportation waste, since it very often exists wherever a sequence flow goes from one lane to another (as this flow represents a handoff). Accordingly, in process models with multiple pools, message flows are also potential waste.
(Move) Motion – of resources internally within the process. Common in manufacturing processes, less common in service processes.
(Hold) Inventory – arises whenever we hold more inventory than strictly necessary at a given point in time in order to keep the production lines running. In business processes, inventory waste usually doesn't take the form of materials inventory (physical inventory); instead, it shows up in the form of Work-In-Process (WIP: the number of cases that have started and have not yet completed).
(Hold) Waiting – a task waiting for materials or input data; a task waiting for a resource; a resource waiting for work (aka resource idleness).
(Overdo) Defects – refers to all work performed in order to correct, repair, or compensate for a defect in a process. It encompasses rework, meaning situations where we must perform again a task that we previously executed in the same case, because of a defect that occurred the first time the task was performed.
(Overdo) Over-processing – refers to work that is performed unnecessarily given the outcome of a process instance. Includes unnecessary perfectionism and tasks that are performed and later found not to be necessary.
Issue Register – has the purpose of maintaining, organizing and prioritizing identified weaknesses (issues). It provides a more detailed analysis of individual issues and their impact on the performance of the process.
NOTE: An issue register contains both issues (direct impact on business performance) and factors (indirect impact on business performance: factors affect issues, which in turn affect the business process).
Sources of Issues – they may arise as an input to a BPM project, be collected as part of ongoing process improvement actions, be collected during process discovery (modelling), or result from value-added / waste analysis.
Pareto Analysis and PICK Charts – help us invest redesign efforts only on the issues with the highest impact.
Root Cause Analysis – a family of techniques that helps analysts identify and understand the root causes of issues or undesirable events. It is helpful to identify and understand the issues that prevent a process from having a better performance. Two RCA techniques are:
Cause-Effect Diagrams – depict the relationship between a given negative effect (usually a recurrent issue or an undesirable level of process performance) and its potential causes, divided into causal factors and contributing factors (collectively called factors). Factors are grouped into categories (e.g. the 6 M's categorization) which help guide the search for potential causes.
Why-Why Diagrams – another technique to analyze the causes of negative effects, such as issues, in a business process. The basic idea is to recursively ask the question "Why?" until a factor that stakeholders perceive to be a root cause is found. The 5 Whys Principle states that five "whys" are enough to find the root cause of a negative effect (this should be treated as a guideline for how far one should go, not a strict rule).
NOTE: Why-why diagrams are a technique for structuring brainstorming sessions (e.g.: workshops) for root cause analysis.
Qualitative Process Analysis recommended procedure:
1.
Segregate value-adding (VA), business value-adding (BVA) and non-value-adding (NVA) steps
2. Identify waste
3. Collect and systematically organize issues, and assess their impact
4. Analyze the root causes of issues.
Chapter 6 – Quantitative Process Analysis
Process Performance Measures – any company would ideally like to make its processes faster, cheaper and better. To achieve this, we will study 4 types of process performance measures. Cost measures include:
Material Costs – cost of tangible or intangible resources used per process instance;
Resource Costs – cost of person-hours employed per process instance. Example: a resource utilization of 60% means that, on average, resources are idle 40% of their allocated time. Typically, when resource utilization > 90% ➔ waiting time increases steeply.
Flow Analysis – a family of techniques that allow us to assess the global performance of a process given some knowledge about the performance of its activities. Flow analysis can be used to compute the average cost of a process instance knowing the cost-per-execution of each activity, or the error rate of a process given the error rate of each activity.
Process Performance = Process Model + Performance of Each Activity
Process Cycle Time (includes waiting times) – the average time between the moment a process starts and the moment it ends. Accordingly, activity cycle time is the average time between the moment an activity is ready to be performed and the moment it finishes.
Calculating Cycle Time Using Flow Analysis
Cycle Time Efficiency (CTE) – an activity's or process' cycle time consists of:
Waiting time – the portion of the cycle time where no work is being done to advance the process
Processing time – the time that actors spend doing actual work.
In most cases, waiting time takes up a substantial proportion of the overall cycle time, sometimes due to third-party actors. In many cases it is beneficial to start by assessing the processing time against the cycle time.
Cycle Time Efficiency (CTE) = Theoretical Cycle Time (TCT) / Cycle Time (CT)
Ratio close to 1 – there is little room for improving the cycle time, unless changes are introduced in the process
Ratio close to 0 – there is a significant amount of room for improving cycle time (waiting time takes up a substantial proportion of the overall cycle time)
Other Flow Analysis Applications – it can also be used to compute:
The average cost of process instances (assuming we know the cost of each activity)
The number of times, on average, each activity is executed
Flow Analysis Limitations – it only works on structured models and assumes fixed arrival rates. Cycle time analysis does not consider:
The rate at which new process instances are created (arrival rate)
The number of available resources
Why is flow analysis not sufficient? Flow analysis does not consider waiting times due to resource contention. Queueing analysis and simulation address these limitations and have broader applicability.
With a higher arrival rate at fixed resource capacity we get: high resource contention; higher activity waiting times (longer queues); higher activity cycle times; higher overall cycle time. The slower you are, the more people have to queue up, and vice-versa. Cycle time and flow analysis do not deal with such delays; for that we use queueing theory.
Capacity problems are common and a key driver of process redesign. We need to balance the cost of increased capacity against the gains of increased productivity and service.
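The standard flow-analysis aggregation rules (sequence: sum of cycle times; XOR-block: probability-weighted average; AND-block: maximum of the parallel branches; rework loop with repetition probability r: T / (1 - r)) can be sketched as below. The process structure and all numbers are hypothetical; the CTE line simply applies CTE = TCT / CT.

```python
# Flow-analysis building blocks for cycle time (CT) on a structured model.
def seq(*cts):        return sum(cts)                       # sequence of fragments
def xor(branches):    return sum(p * t for p, t in branches)  # (probability, CT) pairs
def and_(*cts):       return max(cts)                       # parallel (AND) block
def loop(ct, r):      return ct / (1 - r)                   # rework loop, prob. r

# Hypothetical process: task A (4h), then B (6h) and C (3h) in parallel,
# then a decision: 80% -> D (5h, reworked 25% of the time), 20% -> E (2h).
ct = seq(4, and_(6, 3), xor([(0.8, loop(5, 0.25)), (0.2, 2)]))
print(f"Cycle time: {ct:.2f} hours")

# Cycle time efficiency: theoretical cycle time (processing only) over CT.
theoretical = 0.4 * ct   # assume 40% of CT is actual processing time
print(f"CTE: {theoretical / ct:.2f}")  # close to 0 -> mostly waiting time
```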
Queueing and waiting time analysis is particularly important in service systems, where there are large costs of waiting and/or lost sales due to waiting.
Queueing Analysis – has the advantage of dealing with delays. Capacity problems are common and a key driver of process redesign (the cost of increased capacity needs to be balanced against the gains of increased productivity and service). Deterministic, evenly spaced traffic is rare: even variable but spaced-apart arrivals cause jobs to interfere with one another. Queues result from variability in processing times and/or inter-arrival times, and from high utilization.
Burstiness
Burstiness refers to intermittent increases and decreases in the activity or frequency of an event, which causes interference in queues. Natural arrivals are bursty. Queueing results from variability in processing times and/or inter-arrival times.
High Utilization
The queueing probability increases as the load increases. Utilization close to 100% is unsustainable, as it leads to excessively long queueing times.
The Poisson Process – applicable when the next arrival does not depend on how long ago the previous arrival occurred.
Basics of Queueing Theory – M/M/c
If the above conditions are satisfied, but there are multiple servers instead of a single server, the queueing system is said to be M/M/c, where c is the number of servers. For example, a queue is M/M/5 if the inter-arrival times of customers follow an exponential distribution, the processing times follow an exponential distribution, and there are five servers at the end of the queue. The "M" in this denomination stands for "Markovian", which is the name given to the assumption that inter-arrival times and processing times follow an exponential distribution.
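For M/M/c queues, the mean waiting time can be computed with the standard Erlang C formula; for c = 1 this reduces to the familiar M/M/1 result Wq = ρ / (μ − λ). The sketch below uses invented numbers for the arrival rate λ, the per-server service rate μ, and the number of servers c.

```python
import math

# M/M/c waiting-time analysis via the Erlang C formula.
# lambda_: arrival rate, mu: service rate per server, c: number of servers.
def mmc_wait(lambda_, mu, c):
    rho = lambda_ / (c * mu)            # utilization; must be < 1 for stability
    assert rho < 1, "unstable queue: utilization >= 100%"
    a = lambda_ / mu                    # offered load in Erlangs
    num = a**c / (math.factorial(c) * (1 - rho))
    p_wait = num / (sum(a**k / math.factorial(k) for k in range(c)) + num)
    wq = p_wait / (c * mu - lambda_)    # mean time spent in the queue
    return rho, wq, wq + 1 / mu         # utilization, Wq, total sojourn time W

# Hypothetical example: 5 cases/hour arrive; each of 3 servers handles 2/hour.
rho, wq, w = mmc_wait(lambda_=5, mu=2, c=3)
print(f"utilization={rho:.0%}, Wq={wq*60:.1f} min, W={w*60:.1f} min")

# Re-running with c=4 sharply reduces Wq, illustrating why utilization
# close to 100% leads to very long queueing times.
```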
Other queueing models exist that make different assumptions. Each model is different, so the results we obtain for an M/M/1 or M/M/c queue can be quite different from those we would obtain under other distributions.
Limitations of Queueing Models
Can be used to analyze waiting times (and hence cycle times), but not cost or quality measures.
Suitable for analyzing one single activity at a time, performed by one single resource pool; not suitable for analyzing end-to-end processes consisting of multiple activities performed by multiple resource pools.
Process Simulation – consists in generating a large number of imaginary process instances, executing them step by step, and recording each step.
Elements of simulation:
1. Model the process
2. Define a simulation scenario (to identify bottlenecks and register times). Inputs for each task:
○ The probability distribution of its processing time:
Fixed: the same for all executions of the task; rare (e.g. tasks performed by software)
Exponential: when the processing time is most often around a given mean value, but sometimes considerably longer; typical of complex activities that involve analysis/decisions
Normal: when the processing time is around a given average and the deviation around this value is symmetric, meaning the actual processing time can be above or below the mean with the same probability; typical of repetitive activities
○ The resource pool responsible for the task's execution
○ Cost / added value
Other inputs: the start event's mean inter-arrival time and its associated probability distribution (one can use a goodness-of-fit test); the starting date and time of the simulation; and one of the following: the end date and time of the simulation, the real-time duration of the simulation, or the required number of process instances to be simulated.
3. Run the simulation
4. Analyse the simulation outputs
5.
Repeat for alternative scenarios
Elements of a simulation scenario:
1. Processing times of activities
a. Fixed value
b. Probability distribution
2. Conditional branching probabilities
3. Arrival rate of process instances and probability distribution
a. Typically an exponential distribution with a given mean inter-arrival time
4. Resource pools
a. Name
b. Size of the resource pool
c. Cost per time unit of a resource in the pool
d. Availability of the pool (working calendar)
5. Assignment of tasks to resource pools
Chapter 7 – Process Redesign
Identify possibilities for improving the design of a process.
AS-IS: descriptive modelling
TO-BE: prescriptive modelling
In the area of services, more "degrees of freedom" exist in redesigning processes.
Heuristics
Transactional: change the "as is" process incrementally;
Inward-looking: operate within the scope and context of the "as is" process;
Analytical: based on redesign heuristics that strike tradeoffs between:
○ Cost
○ Time
○ Quality
○ Flexibility
Redesign heuristics
Task-level:
○ 1. Elimination
○ 2. Composition or decomposition
○ 3. Triage.
Flow-level:
○ 4. Re-sequencing
○ 5. Parallelism enhancement.
Process-level:
○ 6. Specialization and standardization
○ 7. Resource optimization
○ 8. Communication optimization
○ 9. Automation.
NOTE: each heuristic improves one or more dimensions at the expense of others.
Five principles of Business Process Reengineering (BPR)
1. Capture information once and at the source
2. Include information-processing work into the real work that produces the information
3. Have those who use the output of the process drive the process
4. Put the decision point where the work is performed, and build control into the process
5. Treat geographically dispersed resources as though they were in the same place
BPR – 1st Principle: Capture information once and at the source
Shared data store: all process workers access the same data. Don't send data around, share it!
Self-service: customers capture data themselves and perform tasks themselves (e.g. collect documents).
BPR – 2nd Principle: Include information-processing work into the real work
Evaluated receipt settlement: when receiving the products, record the fulfillment of the PO, which triggers payment.
BPR – 3rd Principle: Have those who use the output of the process drive the process
Vendor-managed inventory; scan-based trading. Push work to the actor that has the incentive to do it.
BPR – 4th Principle: Put the decision point where the work is performed, and build control into the process
Empower the process workers. Provide process workers with the information needed to make decisions themselves. Replace back-and-forth handovers between workers and managers (transportation waste) with well-designed controls.
BPR – 5th Principle: Treat geographically dispersed resources as though they were centralized
If the same people perform the same function in different locations, integrate and share their work wherever possible. Larger resource pools lead to fewer waiting times, even with relatively high resource utilization.
Chapter 8 – Process Mining
BPM vs. Process Mining; Process Mining vs. Data Science
Data science is an interdisciplinary field aiming to turn data into real value. Value may be provided in the form of predictions, automated decisions, models learned from data, or any type of data visualization delivering insights.
Process mining adds the process perspective to machine learning and data mining. It seeks the confrontation between event data (i.e. observed behavior) and process models.
○ Process Mining is a data analysis technique to reconstruct, analyze and improve business processes based on log data from transactional IT systems.
○ Process Mining bridges the gap between model-based process analysis and data-centric analysis techniques.
Process Mining Methodology
Digital footprints are the starting point of process mining. Independent of the system, the data always contains three important pieces of information:
- information about the process steps or activities that have been conducted
- information about the points in time at which the activities were carried out
- information about the object or ID for which the activities have been executed
The combination of these three pieces is called a 'digital footprint'.
Event Logs are the format in which we can retrieve our digital footprints from the underlying IT systems. They are essentially the log books that IT systems keep to record what events take place for each case ID and at what time. An event log is the data required for process mining. We assume that:
an event log contains data related to a single process
each event in the log refers to a single process instance, often referred to as a case
events can be related to some activity
events within a case need to be ordered
At the minimum, the event log must cover 3 columns: case ID, activity name, timestamp. There may be optional columns, so-called process attributes.
Case ID: indicates which process instance the event belongs to. A case usually consists of multiple events
Activity: describes the action that is captured by the event
Timestamp: indicates the time when the event takes place
Trace: a sequence of events, ordered by their timestamps, that belong to the same case
Process model types:
- Derived from the actual event logs: as-is model
- Defined based on enhancements: should-be model
Case: the object you are following and for which events occur. Examples are customers, patients, machines, etc.
Activity: an action or task that can be performed for a process instance
Variant: a specific sequence of activities, i.e. a unique path from the very beginning to the very end of the process.
"Happy path": the most common variant (the variant with the most cases)
Process Discovery – the first type of process mining is discovery. A discovery technique takes an event log and produces a model without using any a-priori information. If the event log contains information about resources, one can also discover resource-related models. Discovery develops process models based on event logs: it generates a model from the process as executed in reality, uncovering how processes are actually performed by deriving a process model from the event log.
Process Conformance – an existing process model is compared with an event log of the same process. Conformance checking can be used to check if reality, as recorded in the log, conforms to the model and vice versa. By scanning the event log using a model specifying requirements, one can discover potential cases of fraud. Hence, conformance checking may be used to detect, locate and explain deviations, and to measure their severity.
Process Enhancement – aims at changing or extending the a-priori model. One type of enhancement is repair, i.e. modifying the model to better reflect reality. Another type is extension, i.e. adding a new perspective to the process model by cross-correlating it with the log. Use insights from discovery and conformance to improve the process.
Why is process mining different from the typical methods for improving business processes?
Process mining allows us to constantly monitor and improve the process. It offers objective, fact-based insights, derived from actual data, to help you audit, analyze, and improve your existing business processes.
It is faster, cheaper and more accurate than the lengthy and often subjective process mapping workshops it replaces.
Process mining works on top of your existing systems, so there is no rip-and-replace involved.
It is the only method that can automatically discover and analyze a process and, at the same time, monitor it with KPIs and dashboards.
It also allows performing simulations to optimize the business without missing any detail.
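The core process-mining data structures described in this chapter (events as case ID / activity / timestamp triples, traces, variants, the "happy path", and the directly-follows relation that discovery algorithms build on) can be sketched on a tiny, invented event log:

```python
from collections import Counter

# Hypothetical event log: (case ID, activity, timestamp) triples.
log = [
    ("c1", "Register order", 1), ("c1", "Check stock", 2), ("c1", "Ship", 3),
    ("c2", "Register order", 1), ("c2", "Check stock", 2), ("c2", "Ship", 4),
    ("c3", "Register order", 2), ("c3", "Reject order", 3),
]

# Group events into traces: per-case activity sequences ordered by timestamp.
traces = {}
for case, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
    traces.setdefault(case, []).append(activity)

# Variants are distinct traces; the happy path is the most frequent variant.
variants = Counter(tuple(t) for t in traces.values())
happy_path, freq = variants.most_common(1)[0]
print("Happy path:", " -> ".join(happy_path), f"({freq} cases)")

# Directly-follows relation: which activity immediately follows which,
# and how often; this is the raw material for discovery algorithms.
df = Counter((t[i], t[i + 1]) for t in traces.values() for i in range(len(t) - 1))
print(df)
```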
