Questions and Answers
In linear programming, which component represents the goal that we want to optimize?
- Objective function (correct)
- Feasible region
- Decision variables
- Constraints
What does the feasible region in a linear programming problem represent?
- The set of all possible objective function values.
- The set of optimal solutions to the linear programming problem.
- The set of all constraint equations.
- The set of all decision variable combinations that satisfy all constraints. (correct)
In linear programming, what does a constraint of the form $x + y \geq 50$ represent?
- A resource limit
- A minimum requirement (correct)
- An exact requirement
- A non-negativity requirement
Which of the following statements is true regarding slack variables in linear programming?
According to linear programming theory, if an optimal solution exists, where will it occur?
Which type of variable is introduced to represent the extra amount above a minimum requirement in a linear programming constraint?
What does it mean if a linear programming problem has an unbounded objective?
What is the primary difference between a seasonal and a cyclical pattern in time series data?
In time series forecasting, what characterizes a horizontal or stationary pattern?
What is the main assumption behind the naive forecasting method?
Why is a moving average method suitable for data with a horizontal pattern?
In a weighted moving average, how are weights typically assigned?
What is the purpose of the smoothing constant ($\alpha$) in exponential smoothing?
What does a higher Mean Squared Error (MSE) imply about a forecasting model?
Which forecast accuracy metric is most useful for communicating accuracy to non-technical stakeholders?
In trend projection with least squares, what is being minimized to find the best fitting line?
In time series analysis, what is the purpose of using dummy variables when accounting for seasonality?
When using dummy variables for quarterly seasonality, how many dummy variables would you typically use?
In the context of forecasting, what is a "seasonal index"?
For time series data exhibiting a clear upward trend and seasonality, which forecasting approach is most appropriate?
Flashcards
Linear Programming (LP)
A mathematical optimization technique to find the best outcome given certain limitations, maximizing good and minimizing bad.
Decision Variables
Quantities controlled or decided, represented by symbols, when formulating a linear programming problem.
Objective Function
A linear equation we want to maximize or minimize, based on decision variables. Represents the goal.
Constraints
Limitations or requirements, expressed as linear inequalities or equations in the decision variables, that define the feasible region.
Slack Variable
A variable added to a ≤ constraint to turn it into an equality; it measures the unused (leftover) capacity.
Surplus Variable
A variable subtracted from a ≥ constraint to turn it into an equality; it measures how far the minimum requirement is exceeded.
Time Series Patterns
The general behaviors a series can show over time: horizontal (stationary), trend, seasonal, and cyclical.
Horizontal (Stationary) Pattern
Data points fluctuate around a constant mean with no long-term trend.
Trend Pattern
An overall long-term increase or decrease in the data, such as a linear or exponential trend.
Seasonal Pattern
A pattern that repeats at fixed intervals, such as retail sales peaking every December.
Cyclical Pattern
Rises and falls similar to seasonality, but spanning more than one year and without a fixed regular interval; often linked to economic cycles.
Naïve Forecasting Method
A method that forecasts the next period's value to be the same as the last observed value.
Moving Averages (MA)
A method that forecasts by averaging the last N observations, smoothing out randomness in horizontal data.
Weighted Moving Average (WMA)
A moving average that assigns different weights to past observations, usually giving more weight to recent ones.
Exponential Smoothing
A method whose forecast is $F_{t+1} = \alpha A_t + (1-\alpha) F_t$, so the most recent actual gets the most weight and weights on older data decay exponentially.
MAE (Mean Absolute Error)
The average of the absolute forecast errors: $\frac{1}{n} \sum |A_t - F_t|$.
MSE (Mean Squared Error)
The average of the squared forecast errors; it penalizes large errors heavily.
MAPE (Mean Absolute Percentage Error)
The average absolute error expressed as a percentage of the actual values; scale-free and easy to communicate.
Trend Projection with Least Squares
Fitting a line to the data with time as the independent variable by minimizing the sum of squared errors.
Dummy Variables
Binary (0/1) indicator variables added to a regression to capture seasonal effects, e.g. three dummies for quarterly data with one quarter as the baseline.
Study Notes
Linear Programming (Chapter 7)
- Linear Programming (LP) is a mathematical optimization technique to find the best possible outcome given certain limitations.
- It involves choosing levels of decision variables to maximize good things or minimize bad things.
- Restrictions are called constraints, which represent real-world limits.
- LP models are linear, involving linear combinations of decision variables, making them solvable with efficient methods.
Formulating an LP: Decision Variables, Objective, and Constraints
- Three main components are needed to formulate a linear programming problem: decision variables, an objective function, and constraints (DOC).
- Decision Variables: quantities that can be controlled and decided, represented by symbols.
- Objective Function: A single linear function of the decision variables that needs to be optimized, representing the goal.
- Maximize profit = 5x + 3y if each unit of Product A gives $5 and Product B gives $3 of profit.
- Minimize cost = 2x + 4y if those are costs.
- Constraints: limitations or requirements expressed as linear inequalities or equations that involve decision variables.
- Resource limitations: 2x + y ≤ 100 hours of labor, or x + 2y ≤ 80 units of material.
- Policy requirements: x ≥ 10 if producing at least 10 of A.
- The constraints narrow down the possible choices, forming the feasible region of all possible decision variable combinations that satisfy every constraint.
Graphical Solution Method: Feasible Region and Extreme Points
- Solving an LP graphically can be done when there are two decision variables for visualization on a 2D graph.
- Each constraint is a line, splitting the plane into allowed and disallowed areas.
- The common area when overlapping all these constraints is the feasible region, which is often a polygon.
- The goal is to find the point in the feasible region that gives the best objective value.
- If an optimal solution exists, it occurs at one of the "corner" (extreme) points of the feasible region.
- Small LPs can be solved by finding the coordinates of all corner points, computing the objective value at each, and picking the best one.
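As a sketch, the corner-point procedure can be automated for a small two-variable LP. The code below is illustrative only; it reuses the example constraints from this section (2x + y ≤ 100 hours of labor, x + 2y ≤ 80 units of material) and the profit objective 5x + 3y.

```python
from itertools import combinations

# Constraints in the form a*x + b*y <= c; non-negativity is
# rewritten as -x <= 0 and -y <= 0 so every constraint has one shape.
constraints = [
    (2, 1, 100),   # 2x + y <= 100 (labor)
    (1, 2, 80),    # x + 2y <= 80  (material)
    (-1, 0, 0),    # x >= 0
    (0, -1, 0),    # y >= 0
]

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel lines: no unique intersection
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return (x, y)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

# Candidate corners = pairwise intersections that satisfy every constraint.
corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]

# Evaluate the objective 5x + 3y at each corner and pick the best.
best = max(corners, key=lambda p: 5 * p[0] + 3 * p[1])
print(best, 5 * best[0] + 3 * best[1])  # (40.0, 20.0) 260.0
```

Enumerating pairwise intersections and filtering for feasibility is exactly the brute-force corner method described above; it is practical only for tiny problems.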
Finding the Optimal Solution Using Objective Function Lines
- The objective equation (Z = 5x + 3y for profit) can be drawn as a line too, for a given value of Z.
- Increasing Z will result in a parallel line further outward.
- The trick lies in sliding the objective line parallel to itself in the direction that improves the objective.
- For maximization, move it outwards to higher Z; and for minimization, move it inward to lower Z, until you cannot without leaving the feasible region.
- The last point where the line touches the feasible region is the optimal point.
Standard Form and Slack/Surplus Variables
- In a ≤ constraint (resource limit type, e.g. 2x + y ≤ 100), a slack variable is added.
- 2x + y + s = 100, s ≥ 0.
- s (slack) is the leftover capacity: s > 0 means some capacity is unused, and s = 0 means the constraint is binding.
- In a ≥ constraint (minimum requirement type, e.g. x + y ≥ 50), a surplus variable is introduced.
- x + y – e = 50, e ≥ 0.
- e (surplus) is how much the requirement is exceeded. If surplus is 0 it's meeting the requirement exactly, and if surplus is positive it's exceeding the minimum.
- Any inequality is turned into an equality, which is handy for solving methods.
- A linear program in standard form looks like: maximize or minimize a linear objective, subject to equations, and all variables are ≥ 0.
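To make the slack and surplus bookkeeping concrete, here is a minimal check at a hypothetical point (x, y) = (30, 20), using the two example constraints above.

```python
x, y = 30, 20  # a hypothetical feasible point, for illustration only

# <= constraint: 2x + y <= 100 becomes 2x + y + s = 100 with s >= 0
s = 100 - (2 * x + y)   # slack = unused capacity
# >= constraint: x + y >= 50 becomes x + y - e = 50 with e >= 0
e = (x + y) - 50        # surplus = amount above the minimum

print(s, e)  # s = 20 (capacity left over), e = 0 (requirement met exactly)
```

At this point the ≤ constraint has slack 20, while the ≥ constraint is binding (surplus 0).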
Example Scenarios: Investment Portfolio and Advertising Allocation
- Investment Portfolio Optimization: Maximize return = 0.05X + 0.03Y.
- X = Stocks and Y = Bonds.
- Subject to a budget of $100k (X + Y ≤ 100,000) and at least $20k in bonds (Y ≥ 20,000).
- Advertising Allocation:
- Variables: o = online ads, t = TV ads.
- Maximize reach = 1000o + 8000t, with budget = $5000, meaning 100·o + 1000·t ≤ 5000, o, t ≥ 0.
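A quick way to verify a small model like this is to evaluate the reach at each corner of the feasible region; the corners below follow directly from the budget constraint and non-negativity.

```python
# Feasible region: 100*o + 1000*t <= 5000 with o, t >= 0 is a triangle whose
# corners are (0, 0), (5000/100, 0) = (50, 0), and (0, 5000/1000) = (0, 5).
corners = [(0, 0), (50, 0), (0, 5)]

def reach(o, t):
    return 1000 * o + 8000 * t

best = max(corners, key=lambda p: reach(*p))
print(best, reach(*best))  # (50, 0) 50000: spend the whole budget on online ads
```

With these example coefficients, online ads deliver 10 reach per dollar versus 8 for TV, so the corner (50, 0) wins; that outcome is an artifact of the numbers, not a general rule.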
Recap of Linear Programming
- Linear programming is an optimization approach that chooses values of decision variables to maximize or minimize a linear objective while respecting linear constraints.
- Its components are decision variables, an objective function, and constraints.
- The feasible solutions form a region, and an optimal solution (when one exists) occurs at a corner point.
- Slack and surplus variables are added to constraints to convert them into equalities.
- "DOC" – Decision variables, Objective, Constraints.
Time Series Forecasting (Chapter 6)
- Time series forecasting uses historical data to predict future events.
- Identification of the underlying pattern in the data guides which forecasting method will work best.
Patterns in Time Series Data
- The main types are horizontal, trend, seasonal, and cyclical patterns, which describe the general trajectory or behavior of the data over time.
- Horizontal (Stationary) Pattern: data points fluctuate around a constant mean with no long-term trend.
- Trend Pattern: overall long-term increase or decrease in the data, like linear or exponential trends.
- Seasonal Pattern: repeating patterns at fixed intervals, like retail sales peaking every December.
- Cyclical Pattern: similar to seasonality, but over a period longer than one year and not at a fixed regular interval (linked to economic cycles).
Naive Forecasting Method
- Naïve method assumes the next period’s value will be the same as the last observed period.
- If data is roughly horizontal with no trend or seasonality, the naive method is effective.
- A "seasonal naive" variation uses the actual value from last July to forecast this July, when there is seasonality.
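Both variants are one-liners; the series below are hypothetical, chosen only to show the mechanics.

```python
def naive_forecast(history):
    """Next-period forecast = the last observed value."""
    return history[-1]

def seasonal_naive_forecast(history, season_length):
    """Forecast = the actual value from the same period one season ago."""
    return history[-season_length]

demand = [120, 118, 123, 119, 121]          # hypothetical horizontal series
print(naive_forecast(demand))               # 121

monthly = list(range(1, 25))                # 24 hypothetical monthly values
print(seasonal_naive_forecast(monthly, 12)) # the value from 12 months back: 13
```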
Moving Averages (Simple Moving Average)
- Moving average (MA) smooths out randomness for stable, horizontal patterns, providing a reasonable forecast by averaging the last N observations.
- Good usage occurs when there is no strong trend or seasonality.
- Longer window = more smoothing and more lag.
- Shorter window = reacts faster and more noise.
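A minimal moving-average sketch on hypothetical, roughly horizontal data:

```python
def moving_average_forecast(history, n):
    """Forecast the next period as the mean of the last n observations."""
    window = history[-n:]
    return sum(window) / len(window)

sales = [20, 22, 19, 21, 23, 20]          # hypothetical horizontal series
print(moving_average_forecast(sales, 3))  # (21 + 23 + 20) / 3 ≈ 21.33
```

Raising n smooths more but lags more; lowering it reacts faster but passes through more noise, matching the window trade-off above.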
Weighted Moving Average
- Weighted moving average (WMA) assigns different weights to each point, usually giving more weight to recent observations.
- An example three-period forecast with weights 0.5, 0.3, 0.2 (summing to 1): $F_t = 0.5\,A_{t-1} + 0.3\,A_{t-2} + 0.2\,A_{t-3}$.
- The most recent observations are more relevant for predicting the near future.
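A short sketch using the example weights from the text (0.5, 0.3, 0.2, most recent first); the data are hypothetical.

```python
def weighted_moving_average(history, weights):
    """Weights are listed most-recent-first and should sum to 1."""
    recent = history[::-1][:len(weights)]  # most recent observation first
    return sum(w * v for w, v in zip(weights, recent))

sales = [19, 21, 23, 20]  # hypothetical series; 20 is the latest value
print(weighted_moving_average(sales, [0.5, 0.3, 0.2]))
# 0.5*20 + 0.3*23 + 0.2*21 = 10 + 6.9 + 4.2 = 21.1
```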
Exponential Smoothing
- Exponential smoothing suits series without a pronounced trend or seasonality.
- The forecast recursion is $F_{t+1} = \alpha A_t + (1-\alpha) F_t$.
- It gives the highest weight to the most recent actual value, and the weights on older data decay exponentially.
- The smoothing constant α controls how quickly the influence of older observations fades: a larger α reacts faster, a smaller α smooths more.
- Results are a compromise between the last actual and the last forecast (adaptive and smooth).
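A minimal implementation of the recursion, seeding the first forecast with the first actual value (a common convention, not specified in the notes); the series is hypothetical.

```python
def exponential_smoothing(history, alpha, initial_forecast=None):
    """Return the in-sample one-step forecasts and the next-period forecast
    using F_{t+1} = alpha * A_t + (1 - alpha) * F_t."""
    f = history[0] if initial_forecast is None else initial_forecast
    forecasts = []
    for actual in history:
        forecasts.append(f)              # forecast made before seeing `actual`
        f = alpha * actual + (1 - alpha) * f
    return forecasts, f

series = [100, 104, 98, 102]             # hypothetical horizontal series
_, next_f = exponential_smoothing(series, alpha=0.2)
print(next_f)                            # ≈ 100.592
```

Each update literally blends the last actual with the last forecast, which is the "compromise" behavior described above.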
Forecast Accuracy Metrics: MAE, MSE, MAPE
- MAE (Mean Absolute Error): the average of the absolute errors: $\text{MAE} = \frac{1}{n} \sum_{t=1}^{n} |A_t - F_t|$.
- MSE (Mean Squared Error): the average of the squared errors, which penalizes larger errors: $\text{MSE} = \frac{1}{n} \sum_{t=1}^{n} (A_t - F_t)^2$.
- MAPE (Mean Absolute Percentage Error): measures errors in percentage terms: $\text{MAPE} = \frac{1}{n} \sum_{t=1}^{n} \left|\frac{A_t - F_t}{A_t}\right| \times 100\%$.
- MAE is a straightforward average error magnitude.
- MSE (or RMSE) heavily punishes big errors.
- MAPE gives a scale-free percentage score.
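All three metrics are direct translations of their formulas; the actual/forecast series below are hypothetical.

```python
def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean squared error; large misses dominate the sum."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error; assumes no actual value is zero."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

A = [100, 110, 120]   # hypothetical actuals
F = [98, 113, 115]    # hypothetical forecasts
print(mae(A, F), mse(A, F), mape(A, F))  # errors are 2, -3, 5
```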
Trend Projection with Least Squares (Example: Auger’s Plumbing)
- Linear regression with time as the independent variable should be used for time series exhibiting a trend.
- Auger's Plumbing's fitted model predicts calls with the equation $\hat{Y} = 349.67 + 7.4t$.
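The least-squares line can be computed from the closed-form slope and intercept formulas. The toy series below is constructed to lie exactly on a line so the fit is easy to verify; the Auger's Plumbing coefficients above would come from applying the same fit to their call data.

```python
def least_squares_trend(y):
    """Fit y_t = a + b*t for t = 1..n, minimizing the sum of squared errors."""
    n = len(y)
    t = range(1, n + 1)
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    num = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
    den = sum((ti - t_bar) ** 2 for ti in t)
    b = num / den           # slope
    a = y_bar - b * t_bar   # intercept
    return a, b

# Toy data lying exactly on y = 3 + 2t, so the fit should recover a=3, b=2.
a, b = least_squares_trend([5, 7, 9, 11])
print(a, b)  # 3.0 2.0
```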
Seasonality with Dummy Variables (Combining Trend and Seasonality)
- Multiple linear regression with time and seasonal dummy variables accounts for both trend and seasonal effects in one model.
- Seasonal adjustments for Q1, Q2, and Q3 relative to a Q4 baseline use the model $Y_t = a + b\,t + c_1 D_{1,t} + c_2 D_{2,t} + c_3 D_{3,t} + \epsilon_t$.
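A sketch of how quarterly dummies enter the forecast equation; the coefficients below are hypothetical, chosen only to show the mechanics (Q4 is the baseline, so all three dummies are zero there).

```python
def quarterly_dummies(quarter):
    """Return (D1, D2, D3) for quarter 1-4; Q4 is the baseline (0, 0, 0)."""
    return tuple(1 if quarter == q else 0 for q in (1, 2, 3))

def forecast(t, quarter, a, b, c1, c2, c3):
    """Evaluate Y = a + b*t + c1*D1 + c2*D2 + c3*D3 for period t."""
    d1, d2, d3 = quarterly_dummies(quarter)
    return a + b * t + c1 * d1 + c2 * d2 + c3 * d3

# Hypothetical coefficients: trend 2 per period, Q1 runs 10 above the Q4 baseline.
print(forecast(t=9, quarter=1, a=50, b=2, c1=10, c2=-5, c3=3))  # 50 + 18 + 10 = 78
```

In a real application the coefficients a, b, c1, c2, c3 would come from fitting the regression to historical data.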
Recap of Forecasting Techniques
- Identifying patterns aids the choice of method (horizontal, trend, seasonal, cyclical).
- Basic forecasting methods include: the naive method, moving average, weighted moving average, and exponential smoothing.
- Check accuracy metrics to see if the method is performing well.