Validating Formative PLS Models (PDF)

Summary

This paper provides a methodological review of formative model identification and evaluation, with an empirical illustration using SmartPLS. It contrasts formative with reflective constructs and offers guidelines for estimating formative measurement and structural models, focusing on the different validation techniques involved.

Full Transcript

VALIDATING FORMATIVE PARTIAL LEAST SQUARES (PLS) MODELS: METHODOLOGICAL REVIEW AND EMPIRICAL ILLUSTRATION

Completed Research Paper
Thirtieth International Conference on Information Systems, Phoenix, Arizona, 2009

Pavel Andreev, Tsipi Heart, Hanan Maoz, Nava Pliskin
Ben-Gurion University of the Negev, Beer-Sheva

Abstract

The issue of formative constructs, as opposed to the more frequently used reflective ones, has recently gained momentum among IS and Management researchers. Most researchers maintain that formative constructs have been understudied, and that there is a paucity of methodological literature to guide researchers on how such constructs should be developed and estimated. A survey of IS research has revealed that about 29% of constructs were misspecified as reflective rather than formative. Furthermore, guidelines about how models containing formative constructs should be identified and estimated are fragmented and inconsistent. Thus, this paper aims to present a methodological review of formative model identification and evaluation. We give a brief theoretical overview of formative constructs and put together a guideline for estimating formative measurement and structural models. We then present a simplified model composed of three formative constructs and illustrate how it is assessed and estimated using SmartPLS.

Keywords: Partial Least Squares, formative constructs, pure formative PLS model, validation, BVIT, SmartPLS

Introduction

The issue of formative constructs, as opposed to the more frequently used reflective ones, has recently gained momentum among IS and Management researchers (Coltman, Devinney, Midgley and Venaik, 2008; Diamantopoulos, Riefler and Roth, 2008; Petter, Straub and Rai, 2007; Tenenhaus, 2008; Wilcox, Howell and Breivik, 2008). Most researchers maintain that formative constructs have been understudied, and that there is a paucity of methodological literature to guide researchers on how such constructs should be developed and estimated (Wilcox et al., 2008). Several reasons have been put forward as causing this situation, among them the lack of support for formative constructs in popular covariance-based structural equation modeling (SEM) software packages such as LISREL and AMOS, and the difficulty of identifying and estimating formative constructs (Diamantopoulos et al., 2008).
A previous review of the literature revealed that about a third of constructs have been misspecified as reflective instead of formative in both the IS and Management literature (Jarvis, MacKenzie and Podsakoff, 2003; Petter et al., 2007), and that guidelines about how formative constructs in particular, and models containing formative constructs in general, should be identified and estimated are fragmented and inconsistent. This is especially true regarding formative construct error estimation (Diamantopoulos, 2006) and construct convergent validity testing (Diamantopoulos and Winklhofer, 2001; Howell, Breivik and Wilcox, 2007).

The evaluation of Partial Least Squares (PLS) models has been well covered in the literature for PLS models with reflective constructs (e.g., Chin, 1998b; Gefen, Straub and Boudreau, 2000; Straub, Boudreau and Gefen, 2004), where widely accepted, predefined model validation statistical tests are available (Straub et al., 2004). This, however, is not the case for PLS models with formative constructs, an approach recently gaining increasing attention, although the usage of formative indicators for defining latent variables is still scarce (Diamantopoulos et al., 2008). In building research models, the tendency of researchers to focus on structural models and leave measurement models less attended has led to specification problems (Diamantopoulos et al., 2008; Jarvis et al., 2003; Petter et al., 2007), leaving the issue of formative construct validation unresolved. Consequently, some researchers (e.g., Foulds, Quaddus and West, 2007) have validated models containing formative constructs with the same procedures and statistical tests typically used for reflective models (Chin, 1998a; Diamantopoulos and Winklhofer, 2001; Jarvis et al., 2003). All these concerns have raised a wave of skepticism regarding the problematic nature of formative model testing (Wilcox et al., 2008), and thus one of the challenges this study faces from a methodological viewpoint is to identify and classify the statistical tests required for validating formative models.

Hence, this study aims to present a methodological review of formative, PLS-based model identification and evaluation, focusing on the issue of formative construct assessment. We give a brief theoretical overview of formative versus reflective constructs, summarize the decision criteria underlying the construct identification issue, and put together a guideline for evaluating formative constructs assuming the variance of the disturbance or error term to be zero. The theoretical part is then followed by an empirical illustration of the presented methodology using a simplified model composed of three formative constructs. Although the empirical illustration uses a somewhat simplistic model, we maintain that it is instrumental in illustrating the presented methodology. It thus contributes to the Research Methods body of knowledge by organizing the disparate and inconsistent information about formative constructs, and by methodologically guiding researchers, step by step, through the actual task of evaluating formative constructs. In so doing it extends the work of Petter et al. (2007), who focused on the relationship between formative measurement items and their constructs rather than on the full methodology of formative model evaluation.

The rest of the paper is organized as follows. The first part is a methodological overview of SEM, focusing on PLS and formative constructs.
We then proceed with a theoretical description of formative construct estimation, dealing with the thorny issue of convergent validity as well as content and construct validity. The empirical illustration follows, showing a step-by-step model assessment that evaluates the measurement model first and the structural model afterwards (Anderson and Gerbing, 1988). We conclude with a short discussion and suggestions for future work in this field.

SEM in a Nutshell

SEM is a considerably complex statistical technique (Gefen et al., 2000; Golob, 2001) for assessing relations between constructs, including latent variables (LVs) and observed variables. LVs represent conceptual terms used to express theoretical concepts or phenomena. Observed variables, also referred to as measures, indicators or items, are variables that are measured directly. Latent variables can be exogenous, typically denoted ξ, or endogenous, typically denoted η. Exogenous LVs are those from which arrows are only emitted to other variables in the model. According to the widely accepted diagrammatic SEM syntax, paths connecting ξs to ηs are represented statistically as γ coefficients (Gefen et al., 2000), while paths connecting one η to another are designated with β.

The SEM measurement model contains the observed measures, taken from the actual data collected, and the latent constructs. In a typical SEM diagram, X depicts the measures of exogenous constructs and Y depicts the measures of endogenous constructs. Therefore, each X should be connected to the related exogenous construct ξ, and each Y to the related endogenous construct η. Paths between the observed variables X or Y and the latent variables ξ or η are assigned λ (Gefen et al., 2000), which represents item loadings or weights, depending on whether the construct is reflective (arrows point from the LV to the measured indicators) or formative (arrows point from the measured indicators to the LV). Readers are referred to Petter et al. (2007), page 629, for the relevant figure. The differences between formative and reflective constructs are further explained below.

SEM differs from first-generation regression tools in involving the following (Chin, 1998a): 1) relationships among multiple predictor and criterion variables, 2) unobservable LVs, 3) errors in observed or latent variables, and 4) a priori statistical testing of theoretically substantiated assumptions against empirical data (i.e., confirmatory analysis). Two types of SEM methods exist: covariance-based, and component-based or Partial Least Squares (PLS). The covariance-based SEM (CovSEM) method, traditionally considered the best-known SEM method (Chin, 1998b), is popular in many research disciplines, with a widespread availability of software programs such as LISREL, AMOS, CALIS, EQS, and SEPATH. CovSEM attempts to calculate model parameters that minimize the difference between the calculated and observed covariance matrices, yielding goodness-of-fit indices derived from the magnitude of these differences.

The component-based SEM method, also referred to as the Partial Least Squares (PLS) method, is a distribution-free approach that can be presented as a two-step method (Tenenhaus, 2008). The first step refers to path estimates of the outer (measurement) model, used to compute LV scores.
The second step refers to path estimates of the inner (structural) model, where Ordinary Least Squares (OLS) regressions are carried out on the LV scores to estimate the structural equations. Unlike covariance-based SEM, PLS attempts to estimate all model parameters so as to minimize the residual variance of all dependent variables (DVs), both LVs and observed variables (of the reflective LVs) (Chin, 1998b; Diamantopoulos, 2006; Gefen et al., 2000), namely, to maximize the explained variance. In other words, the main objective of the PLS approach is the best prediction of the dependent variables, rather than a good fit to the data, which is the main goal of the CovSEM approach. Thus, PLS is intended mainly for prediction purposes while CovSEM is focused on parameter estimation. Consequently, PLS and CovSEM techniques differ in terms of objectives, assumptions, parameter estimates, latent-variable scores, implications, the epistemic relationship between a latent variable and its measures, model complexity, and sample size (Chin and Newsted, 1999).

Tenenhaus (2008) highlighted some of the PLS weaknesses. First, PLS path-modeling software suffers from a lack of widespread accessibility, because the diffusion of PLS software is limited in comparison with CovSEM software. Second, PLS is used more heuristically, for exploratory research (Chin, 1998b). Third, unlike CovSEM, PLS does not allow testing equality constraints on path coefficients or imposing specific values on different model paths. PLS, however, has some advantages over CovSEM: it exerts minimal demands on the measurement scale, the sample size needed for PLS is smaller than for CovSEM, a large number of variables can be handled with PLS, it employs simpler algorithms, estimates of latent constructs in PLS have a more practical meaning since their formation is clear, it allows building a complex framework of multi-block analysis, and, finally, it eases the task of estimating all-formative constructs (Diamantopoulos and Winklhofer, 2001; Tenenhaus, 2008). (Footnote 1: In principle, models with formative constructs can be tested within covariance-based structural analysis, yet such models are often associated with identification problems that are overcome by using MIMIC models or by including reflective items in addition to the formative ones. This, however, is not an issue in PLS.) The next subsection considers rules for identifying formative and reflective constructs.

Identifying Formative or Reflective Constructs

The choice of construct mode, formative or reflective, in designing a model has received much attention in the literature (Coltman et al., 2008; Diamantopoulos et al., 2008; Jarvis et al., 2003; MacKenzie, Podsakoff and Jarvis, 2005; Petter et al., 2007). Petter et al. (2007) examined publications in MIS Quarterly and Information Systems Research between 2003 and 2005, finding misspecification problems in 29% of the studies. These findings corroborated those of Jarvis et al. (2003), who conducted a similar review of studies in Marketing and Consumer research, demonstrating via a Monte Carlo simulation that specifying formative constructs as reflective may lead to either Type I or Type II errors, as results of the structural model tend to be inflated or deflated, depending on the causality direction. A minimal simulation illustrating this distinction is sketched below.
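To make the misspecification risk concrete, the following simulation contrasts the two data-generating processes. It is our own illustration, not part of the paper: the data are synthetic and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Reflective: indicators are manifestations of one latent factor,
# so they are necessarily inter-correlated (cf. Rule 5 in Table 1 below).
eta = rng.normal(size=n)
reflective = np.column_stack(
    [0.8 * eta + 0.6 * rng.normal(size=n) for _ in range(3)])

# Formative: the construct is a weighted sum of distinct causes,
# which may be entirely uncorrelated with one another.
causes = rng.normal(size=(n, 3))
xi = causes @ np.array([0.5, 0.3, 0.2])

print(np.corrcoef(reflective, rowvar=False).round(2))  # high off-diagonals
print(np.corrcoef(causes, rowvar=False).round(2))      # near-zero off-diagonals
print([round(np.corrcoef(xi, causes[:, j])[0, 1], 2)
       for j in range(3)])  # yet each cause relates to the construct
```

Treating the formative block as reflective (e.g., "purifying" it by dropping low-loading items) would discard distinct causes and change the construct's meaning, which is the mechanism behind the inflated or deflated structural estimates noted above.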
Hence, the choice of whether to use formative or reflective measures greatly affects estimation procedures, and this choice should be made at the preliminary stage of model design. Table 1 summarizes the most important rules for determining whether a construct should be formative or reflective, based on the detailed work of Jarvis et al. (2003) and Petter et al. (2007). Readers are referred to the original papers for further details.

Table 1. Decision Criteria for Formative vs. Reflective Construct Identification

Rule 1 - Causality, or the theoretical nature of the relationships between each construct and its measures: Causality runs from the indicators to the construct. The causal nature of the relationship between a construct and its indicators can be established by asking: do the items define (i.e., are they a summation of) or reflect (i.e., are they manifestations of) the construct? If the indicators define the construct, the construct is formative. If the indicators are manifestations of the construct, the construct is reflective.

Rule 2 - The impact of changes in the latent and observed variables: Formative observed variables, as their name implies, "cause" the construct (Gefen et al., 2000). Therefore, changes in formative measures influence the formative construct, yet a change in the construct does not necessarily impact all its observed items. Reflective observed variables are reflections of the construct; as a result, changes in the construct impact all measurement items simultaneously.

Rule 4 - Measurement item interchangeability: Formative measures may not be interchangeable, typically have different themes, and dropping one might impede the content validity of the construct, or change it altogether. Reflective items, in contrast, are interchangeable, have a common theme, and dropping one of the measures does not change the meaning of the construct.

Rule 5 - Measurement item correlations: Formative measures do not have to covary. Ideally, formative measures should not be highly correlated, since multicollinearity (desirable for reflective measures) can weaken a formative construct. Reflective measures represent the same phenomenon (the reflective construct) and thus should be highly correlated; a change in the construct or in one item implies a change in all items.

Rule 6 - Antecedents and consequences of the measurement items: Formative measures may define different aspects of the latent construct, and thus need not share the same antecedents and consequences. Reflective measures, on the other hand, are inherently interchangeable, and therefore the antecedents and consequences of all reflective measures are expected to be the same.

Since this work aims at presenting a methodological overview of formative model estimation, the next section relates only to this type of model.

PLS Formative Model Estimation

The mathematics underlying the PLS path model can be described from the perspective of two models: the measurement model and the structural model (Chatelin, Vinzi and Tenenhaus, 2002; Diamantopoulos, 2006; Tenenhaus, Vinzi, Chatelin and Lauro, 2005). The relations between constructs and their measures are described next for the measurement (outer) model, relating the measurement variables (MVs) to their latent variables (LVs), followed by a description of the structural (inner) model, relating the LVs to each other.
Relations between Constructs and their Measures in the Measurement Model

Within the measurement model, only formative LVs of two types are considered: exogenous unobservable variables ξ, each described by a block X of observable variables, and endogenous unobservable variables η, each described by a block Y of observable variables. The formative constructs ξ and η are assumed to be generated by their own MVs, X and Y respectively. Consequently, the LVs are represented in the measurement model as linear functions of their respective MVs plus errors δ and ε, where the error term is uncorrelated with the observed measures (Diamantopoulos and Winklhofer, 2001):

$\xi = \sum_i \lambda_{x_i} x_i + \delta$   (1)

$\eta = \sum_i \lambda_{y_i} y_i + \varepsilon$   (2)

Relations between Constructs and Measures in the Structural Model

The structural (inner) model depicts linear equations of the relations among related LVs (3), where an endogenous dependent variable $\eta_j$ is predicted by the other endogenous ($\eta_i$) and exogenous ($\xi_k$) constructs that are causally connected to it (Tenenhaus, 2008):

$\eta_j = \sum_i \beta_{ji} \eta_i + \sum_k \gamma_{jk} \xi_k + \zeta_j$   (3)

The number of structural equations equals the number of endogenous LVs, each of which should appear as the dependent variable in its individual equation. The only variables that stay independent in all equations are the exogenous variables. The recursive causality model must be causally chained, without loops. A computational sketch of equations (1)-(3) follows.
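As an illustration of how equations (1)-(3) translate into computation, the following sketch scores the formative LVs as weighted sums of their indicators (disturbances set to zero, as assumed throughout this paper) and then estimates the inner path by an OLS regression on the LV scores, mirroring the second PLS step. The data and outer weights are invented placeholders rather than output of an actual PLS run.

```python
import numpy as np

rng = np.random.default_rng(1)

# Block X: formative indicators of an exogenous LV (rows = cases).
X = rng.normal(size=(100, 3))
lambda_x = np.array([0.5, 0.4, 0.3])   # assumed outer weights
xi = X @ lambda_x                      # equation (1), delta = 0

# Block Y: formative indicators of an endogenous LV.
Y = np.column_stack([0.6 * xi + rng.normal(size=100) for _ in range(2)])
lambda_y = np.array([0.6, 0.5])        # assumed outer weights
eta = Y @ lambda_y                     # equation (2), epsilon = 0

# Equation (3): with a single exogenous predictor, the inner model
# reduces to an OLS regression of eta on xi.
gamma, intercept = np.polyfit(xi, eta, 1)
print(f"gamma = {gamma:.3f}")
```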
Evaluation of Pure Formative PLS Models

The evaluation of PLS models should target the measurement (outer) model and the structural (inner) model.

Evaluation of the Measurement (Outer) Model

As in reflective models, evaluation of formative models includes assessing content validity, construct reliability, and construct validity.

Content Validity

Content validity is an important stage of model validation, since wrong specification of the indicators could lead to forming a latent construct that has little in common with the explored content domain, and to biased estimation results. For formative constructs, content validity concerns whether the presented indicators capture the entire scope of the construct as described by the construct's domain (Diamantopoulos and Winklhofer, 2001; Straub et al., 2004). Unlike the case of reflective indicators, the error term in a formative structure is not a measurement error but rather a disturbance term, which represents the remaining content of the construct domain left unexplained by the presented indicators (Diamantopoulos, 2006). Therefore, it is essential to minimize the disturbance term by identifying a broad set of indicators that covers all aspects of the construct. To ensure content validity, Straub et al. (2004) proposed conducting a thorough literature review of the construct domain. They also mention the use of qualitative research methods such as expert interviews, panel discussions, and Q-sorting, especially when the literature review does not lend support to the construct validity. Petter et al. (2007) proposed making content validity a mandatory practice in the evaluation of models with formative constructs.

Unlike reflective measurements, the variance of the disturbance error cannot be related to the measurement error of the X or Y MVs, but rather to a theoretical deficiency in identifying the construct, i.e., to what is not included in the construct definition. Therefore, the error term is practically unrelated to the MVs; hence, for all i, $\mathrm{Cov}(x_i, \zeta) = 0$, namely, the error term is uncorrelated with the observed measures (Diamantopoulos, 2006). A comprehensive discussion of the error term in formative constructs is given by Diamantopoulos (2006), who maintains that in cases where the construct can be fully defined by its measures it is valid to assume that there is no error term. Nonetheless, he admits that such cases are quite unlikely, and suggests identifying the error term by various methods, such as using a MIMIC model. For simplicity's sake, however, we leave the discussion of the error term for a future enhancement of this work, and proceed assuming the error to be zero.

Construct Reliability

Construct reliability concerns the internal consistency of the measurement model (Straub et al., 2004). Examination of the measurement properties of a formative construct can be performed by a multicollinearity test, a test of indicator validity (path coefficient significance), and, optionally, if appropriate, test-retest (Petter et al., 2007). Theoretically, multicollinearity is desirable for reflective indicators. For formative models, however, multicollinearity due to substantial correlations between formative indicators is undesirable (Diamantopoulos and Winklhofer, 2001). Multicollinearity does not affect the predictive effectiveness of the formative construct, but it may lead to estimation biases and instability of the indicators' coefficients, which render indicator validity problematic, leading to overall problematic construct reliability (Diamantopoulos and Winklhofer, 2001; MacKenzie et al., 2005). Diamantopoulos and Winklhofer (2001) pointed out that multicollinearity hinders the separation of the distinct indicators' impacts. From a theoretical perspective, multicollinearity means that the specification of indicators was not accomplished successfully, since formative indicators should represent distinctive aspects of the content domain, and high covariance might mean that indicators explain the same aspect of the domain. One solution to existing multicollinearity might be the elimination of a problematic indicator, if there is another indicator that can describe the same aspect of the construct's "universe", or the merging of two such items into one.

The magnitude of multicollinearity can be assessed statistically by the variance inflation factor (VIF) and the tolerance, which is the reciprocal of VIF. VIF can be calculated for each indicator as

$VIF_i = \frac{1}{1 - R_i^2}$

where $R_i^2$ is the coefficient of determination obtained by regressing indicator i on the remaining indicators of the construct. In practice these VIFs can be obtained from the collinearity diagnostics of a regression in which y, the calculated score of the formative construct, serves as the dependent variable and the indicators as independent variables. As a rule of thumb, $VIF < 10$ indicates absence of multicollinearity (Gefen et al., 2000). A more rigorous rule, however, was proposed by Diamantopoulos and Siguaw (2006), according to which $VIF < 3.3$. A sketch of this computation follows.
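A minimal VIF computation for one formative block, following the $1/(1 - R_i^2)$ definition above; the data are synthetic (in the empirical part below, the authors obtained these diagnostics from SPSS).

```python
import numpy as np

def vif(block: np.ndarray) -> np.ndarray:
    """VIF_i = 1 / (1 - R_i^2), where R_i^2 comes from regressing
    indicator i on the remaining indicators of the block."""
    n, k = block.shape
    out = np.empty(k)
    for i in range(k):
        y = block[:, i]
        X = np.column_stack([np.ones(n), np.delete(block, i, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out[i] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(2)
block = rng.normal(size=(200, 4))
block[:, 3] += 0.9 * block[:, 0]   # induce collinearity between two indicators
print(vif(block).round(2))          # compare against 3.3 (strict) or 10 (lenient)
```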
Indicator validity refers to the importance of each individual indicator to the related formative construct (Jahner, Leimeister, Knebel and Krcmar, 2008; MacKenzie et al., 2005), reflected by three aspects of the path coefficients λ in equation (2) above:

1) Significance of the path coefficient: path coefficients from the indicators to the construct should be statistically significant under a t-test. However, Diamantopoulos and Winklhofer (2001) advocated that even an insignificant indicator should be preserved in the item set capturing the construct, since it may still represent some aspect of the domain, and discarding it may lead to construct specification problems, stating that "indicator elimination - by whatever means - should not be divorced from conceptual consideration when a formative measurement model is involved" (p. 273). Therefore, an insignificant indicator might be removed from the model if its removal is theoretically justified and does not alter the conceptual meaning of the construct. Otherwise, it should remain within the construct or be replaced by another indicator that describes the same facet of the construct. A test of coefficient significance and calculation of t-statistics can be performed by applying the bootstrapping procedure, sketched after this list.

2) Sign of the path coefficient: the sign has to be the same as theoretically hypothesized (Jahner et al., 2008).

3) Magnitude of the path coefficient: the weights of the indicators should be significant and preferably not less than 0.1. A low path coefficient might indicate a wrongly specified indicator, implying that perhaps the construct should be split into two constructs, or that the construct model should be transformed into a more advanced higher-order formative model (Jahner et al., 2008).
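The bootstrapping procedure referred to in point 1 can be sketched as follows. This is a toy stand-in for what SmartPLS does: purely for illustration, the indicator weights are re-estimated in each resample by a plain OLS of the construct score on its indicators, which is not the full PLS algorithm.

```python
import numpy as np

def bootstrap_t(indicators, score, r=500, seed=3):
    """t-statistics for indicator weights: original estimate divided by
    the standard deviation of r bootstrap re-estimates (m = n here)."""
    rng = np.random.default_rng(seed)
    n = len(score)
    X = np.column_stack([np.ones(n), indicators])
    est = np.linalg.lstsq(X, score, rcond=None)[0][1:]
    boots = []
    for _ in range(r):
        idx = rng.integers(0, n, size=n)   # resample cases with replacement
        Xb = np.column_stack([np.ones(n), indicators[idx]])
        boots.append(np.linalg.lstsq(Xb, score[idx], rcond=None)[0][1:])
    return est / np.std(boots, axis=0, ddof=1)

rng = np.random.default_rng(4)
ind = rng.normal(size=(150, 3))
lv = ind @ np.array([0.6, 0.3, 0.0]) + 0.3 * rng.normal(size=150)
print(bootstrap_t(ind, lv).round(2))   # |t| > ~1.96: significant at the 5% level
```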
Construct Validity

Unlike construct reliability, where measurement within the construct is the issue, construct validity refers to the wider validation of the construct's measures outside the construct (Straub et al., 2004). Construct validity is concerned with exposing whether the indicators of the construct indeed measure what they are intended to, from the perspective of the relationships between constructs and between constructs and their respective indicators. One question that characterizes construct validity is whether the set of indicators as a whole covers the construct concept, in which case the behavior of the set can be seen as the behavior of one object, with all indicators pointing in one direction regardless of circumstances. Construct validity for formative constructs can be assessed by discriminant validity, convergent validity, external validity and/or nomological validity.

Discriminant validity is a statistical test of the expected ability to discriminate between different constructs. It assesses whether indicators of latent constructs that "theoretically should not be related to each other are, in fact, observed as not related to each other" (Trochim, 2006). MacKenzie et al. (2005) proposed an approach appropriate for evaluating discriminant validity for both formative and reflective measures. The idea of the test lies in showing that the inter-correlations of the model constructs are not high. To perform this test, all latent variables should be standardized. The rule of thumb is that correlations between constructs should be under 0.71; higher correlations indicate that the common variance of the two constructs is considerably more than 50%, signaling a specification issue that deserves attention. One solution might be a joint construct, or reconsidering the whole model structure.

Loch et al. (2003) offered an alternative approach for testing the discriminant validity of formative constructs based on PLS weights, which are equivalent to the influence of the formative indicators on their constructs. To perform this test, all data should be standardized and transformed, following four steps. First, all measures are converted to a common scale. Second, all the normalized measures are multiplied by their PLS weights. Third, the retransformed indicators of each construct are summed up to calculate the composite score of the construct. Fourth, a modified multitrait-multimethod (MTMM) matrix analysis (Campbell and Fiske, 1959; Trochim, 2006) is employed, in which the relations between the weighted scores of the indicators, as well as the composite scores of the constructs, are examined for inter-indicator correlations and for indicator-to-construct correlations. In principle, discriminant validity is attained when the correlation between the weighted score of each indicator and its respective construct is higher than its correlations with the other formative constructs. Likewise, convergent validity is demonstrated by higher inter-indicator correlations among indicators forming the same construct than with other indicators; this condition, however, is theoretically problematic and not always appropriate for formative constructs, as discussed below. In sum, discriminant validity is substantiated when theoretical and empirical expectations regarding the size and sign of correlations are met, correlations between the constructs are under 0.71, and the indicator-to-construct condition is met. The four-step procedure is sketched below.
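The four steps reduce to a few array operations. The sketch below assumes the PLS weights have already been estimated elsewhere (e.g., in SmartPLS); the block names, weights, and data are illustrative only.

```python
import numpy as np

def mtmm(blocks: dict, weights: dict):
    """Loch et al. steps 1-4: standardize, weight, sum to composites,
    and correlate weighted indicators with all composite scores."""
    cols, names = [], []
    for name, X in blocks.items():
        Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # step 1: common scale
        W = Z * weights[name]                             # step 2: PLS weights
        for j in range(W.shape[1]):
            cols.append(W[:, j])
            names.append(f"{name}_{j + 1}")
        cols.append(W.sum(axis=1))                        # step 3: composite
        names.append(name)
    R = np.corrcoef(np.column_stack(cols), rowvar=False)  # step 4: MTMM matrix
    return names, R

rng = np.random.default_rng(5)
blocks = {"BR": rng.normal(size=(100, 3)), "ITR": rng.normal(size=(100, 3))}
weights = {"BR": np.array([0.5, 0.3, 0.4]), "ITR": np.array([0.6, 0.2, 0.4])}
names, R = mtmm(blocks, weights)
# Discriminant validity: each weighted indicator (e.g., BR_1) should correlate
# more with its own composite (BR) than with the other composite (ITR).
```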
Convergent validity is a statistical test of whether indicators of latent constructs that "theoretically should be related to each other are in fact observed to be related to each other" (Trochim, 2006). As stated above, Loch et al. (2003) proposed using the modified MTMM for testing both discriminant and convergent validity, stating that the existence of all inter-indicator and indicator-to-construct significant correlations might be evidence of the convergent validity of the construct. While true for reflective constructs, the inter-indicator condition is problematic when formative constructs are tested for convergent validity, since formative indicators may be positively or negatively correlated, or not correlated at all (Bollen, 1989; Bollen and Lennox, 1991). Hence, contrary to Loch et al. (2003), MacKenzie et al. (2005) expressed concerns regarding the relevance of assessing convergent validity for constructs with formative indicators. Although, in the absence of a consensus, most studies of formative constructs eliminate convergent validity from their validity procedure, this study proposes some common principles that could be applied to assessing the convergent validity of formative indicators. It is notable, however, that there is no common rule for all cases, and convergent validity might be assessed differently each time, depending on the specific type of indicators a construct is formed of. In terms of convergent validity, one needs to show that those measures that should theoretically be related are really related. The issue is how to define which, how, and to what extent the indicators should be theoretically related. First, several interrelations between indicators could be theoretically expected. Second, if inter-indicator correlation still takes place it should be low, since it may otherwise lead to a multicollinearity problem. Third, since theoretically and empirically the indicators forming a construct vary in the magnitude of their effect, reflected by their weight, only those indicators with a statistically significant effect on the construct should be chosen for convergent validity.

Testing for external validity calls for showing the extent to which the formative indicators actually capture the construct (Chin, 1998b; Jahner et al., 2008). This is not always theoretically possible, yet in some cases it is essential. There are three possible approaches assisting in model identification (Diamantopoulos et al., 2008; Diamantopoulos and Winklhofer, 2001). First, all indicators capturing one formative construct can be related to some variable that represents an overall index, such as a summary or overall rating. Second, a Multiple Indicators and Multiple Causes (MIMIC) model (Jöreskog and Goldberger, 1975) might be applied in the model identification procedure, where both formative and at least two reflective indicators measure one construct (Diamantopoulos et al., 2008). This option, however, is not always feasible, since finding adequate reflective measures is challenging, and because the results for the formative construct depend on the nature of the reflective ones. Third, formative construct measures might be identified by linking the formative construct with two reflective constructs to which it is theoretically related (Diamantopoulos et al., 2008). This approach to external validity is also not always feasible. Moreover, if these two reflective constructs are not included in the structural theoretical nomological network of the model, it is not justifiable to include them for the sole purpose of model identification (Diamantopoulos et al., 2008).

Nomological validity, using nomological networks, is another tool for establishing external validity. A nomological network (Campbell and Fiske, 1959) includes a theoretical framework of research objects, an empirical framework of how these objects will be measured, and a specification of the relationships between these two frameworks. Nomological validity can be assessed by the same procedure for both formative and reflective indicators (Straub et al., 2004): first, a construct should be linked with its hypothesized antecedent and consequence constructs; second, nomological validity is evidenced if the hypothesized linkages (structural paths) between the latent variables are found to be significantly greater than zero and their signs are in the expected causality direction.

Evaluation of the Structural (Inner) Model

Only a handful of common recommendations with respect to the evaluation of the PLS structural model emerged from the literature review, based in most cases on the suggestions of Chin (1998b). In addition to Chin's suggestions, recommendations of Chatelin et al. (2002) and Tenenhaus et al. (2005), as well as of other authors, are given next for evaluating the explanatory power and predictive power of the structural model.

Explanatory Power

Explanatory power involves assessing R-square and exploring the effect size of the model constructs. At the first stage of evaluating the PLS structural model, the R-square value is calculated with the PLS algorithm for each dependent LV.
The interpretation of R² obtained with the PLS model is the same as for multiple regression, in terms of the variance explained by the independent constructs relative to the total variance in the actual data. In addition to exploring R-square values, changes in R², also known as the effect size test, can be explored to investigate the substantive impact of each independent construct on the dependent construct. This technique of examining changes in R² was first presented by Cohen (1988). The strength of the substantive effect of an independent construct can be calculated as follows:

$f^2 = \frac{R^2_{included} - R^2_{excluded}}{1 - R^2_{included}}$

where $R^2_{included}$ is the explained variance of the dependent construct when the particular independent construct whose effect is investigated is included in the model, and $R^2_{excluded}$ is the explained variance of the same dependent construct when that independent construct is removed from the model. Chin (1998b) stated that the effect size $f^2$ of PLS constructs, similar to Cohen's implementation for multiple regression, might be small ($f^2 = 0.02$), medium ($f^2 = 0.15$), or large ($f^2 = 0.35$). This computation is sketched below.
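Given the two R² values from the PLS runs with and without the construct of interest, f² is a one-line computation; the numbers below are placeholders, not the paper's results.

```python
def effect_size(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f^2: the change in R^2 scaled by the unexplained variance."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Interpret against Chin's thresholds: 0.02 small, 0.15 medium, 0.35 large.
print(round(effect_size(0.53, 0.40), 3))
```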
Predictive Power

Predictive power involves testing the significance of the path coefficients in terms of contribution power and predictive relevance. The standardized path estimates, indicating the magnitude of the impact of an independent construct on a dependent construct and interpreted in the same way as path coefficients in multiple regression, can be tested by employing a re-sampling technique such as jackknifing or bootstrapping, preferably the latter (Chin, 1998b). Bootstrapping is a non-parametric method for examining the stability of PLS path estimates, based on resampling subsamples with replacement from the original sample. Bootstrapping produces a researcher-defined number of subsamples (r), each containing the same number of observations (m, the bootstrap sample size) randomly chosen from the original data (n, the original sample size). The commonly recommended number of bootstrap subsamples (r) is at least 200; however, a higher number of repetitions provides more reliable results, and Chin (1998b) recommends 500 bootstrap subsamples. There is no consensus regarding the size of the bootstrap sample (m). According to Chin (1998b), due to the small sample sizes employed in PLS analyses, most studies tend to choose m = n, i.e., a bootstrap sample size (m) equal to the size of the original sample (n). In the majority of cases the choice m = n is justified, since it allows capturing all the options presented by the original sample. Although the choice m = n works well in various applications, the answer to the question of whether the size of each bootstrap subsample has to match the size of the original sample is not necessarily positive. There is a camp of researchers and practitioners (Arcones, 2003; Bickel and Götze, 1997; Bickel and Sakov, 2008; Chernick, 2008; Chung and Lee, 2001) who argue that in some circumstances the optimal m could be less than n, especially for large sample sizes. The bootstrapping procedure provides t-test results for all path coefficients.

The contribution power of each of the explanatory constructs can be substantiated by calculating the weighted effect of the independent construct on the dependent one, using the following decomposition:

$R^2_j = \sum_i \beta_{ji} \, \mathrm{cor}(\xi_i, \eta_j)$

where $\beta_{ji}$ is the standardized path coefficient between an independent construct i and a dependent construct j, and $\mathrm{cor}(\xi_i, \eta_j)$ is their respective correlation. If and only if the path coefficient and the related correlation have the same sign, this decomposition allows calculating the contribution of each explanatory construct in predicting the dependent construct. In addition, insignificant coefficients should be handled carefully to avoid wrong interpretation due to multicollinearity (Tenenhaus et al., 2005).

The structural model can be tested for predictive relevance (the Stone-Geisser Q² test) by employing the blindfolding procedure (Chin, 1998b). This can be done in three steps (Tenenhaus et al., 2005). First, the data are divided into G blocks; the omission distance G should be a prime integer ranging from 5 to 10 (Chin, 1998b), with G = 7 recommended in the literature. Second, each of the data blocks is omitted from the sample in its turn. Third, PLS calculations are conducted G times, each time excluding one of the data blocks. The predictive measure Q² for a block j is calculated as follows:

$Q_j^2 = 1 - \frac{\sum_G SSE_{jG}}{\sum_G SSO_{jG}}$

where $\sum_G SSE_{jG}$ is the sum of squares of the prediction errors for block j and $\sum_G SSO_{jG}$ is the sum of squares of the original data observations for block j. Chin (1998b) stated that Q² reflects an index of the goodness of reconstruction by the model and parameter estimates. Q² > 0 provides evidence that the omitted observations were well reconstructed and reflects the presence of predictive relevance, while a negative Q² reflects the absence of predictive relevance.

Multi-group Analyses

One of the recent directions considered in research is the incorporation of multi-group analyses into PLS models. Multi-group analysis, which allows comparison of parameter estimates across different groups, attracts attention from various research disciplines (Acedo and Jones, 2007; Jahner et al., 2008; Keil, Tan, Wei, Saarinen, Tuunainen and Wassenaar, 2000; Sánchez-Franco, 2006; Wullenweber and Weitzel, 2007).

Two-sample (PLS) t-test assuming the variances are not too different: Chin (http://disc-nt.cba.uh.edu/chin/plsfaq/multigroup.htm, retrieved on 05.08.2008) has modified and adapted the formulas for the two-sample t-test in multiple regression to PLS models. The final formula for the two-sample t-test, with degrees of freedom $df = m + n - 2$, is:

$t = \frac{Path_{sample1} - Path_{sample2}}{\sqrt{\frac{(m-1)^2}{m+n-2} SE_{sample1}^2 + \frac{(n-1)^2}{m+n-2} SE_{sample2}^2} \cdot \sqrt{\frac{1}{m} + \frac{1}{n}}}$   (*)

It is notable that formula (*) commonly appears with the (m-1) and (n-1) terms not squared, resulting in a larger t value, which may lead to a Type I error. Notwithstanding that several studies (Acedo and Jones, 2007; Jahner et al., 2008; Keil et al., 2000; Sánchez-Franco, 2006; Wullenweber and Weitzel, 2007) referred to Chin's corrected formula, the common practice is still the implementation of the standard formula for multiple regression (where the degrees-of-freedom terms are not squared); hence we call for attention in this matter.

Two-sample (PLS) t-test assuming different variances: The final formula for the two-sample t-test assuming different variances, where SE is the standard error, is:

$t = \frac{Path_{sample1} - Path_{sample2}}{\sqrt{SE_{sample1}^2 + SE_{sample2}^2}}$   (**)

The only difference between the first (*) and the second (**) formula is associated with the expectations regarding the equality of the variances of the two samples. However, as Chin noticed, for a considerably large sample the two approaches give similar results if the variances of the two samples are not too different. Both statistics are sketched below.
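Both statistics can be written directly from the formulas; the inputs below (path coefficients, bootstrap standard errors, group sizes) are illustrative.

```python
import math

def pls_two_group_t(path1, se1, m, path2, se2, n, equal_var=True):
    """Two-sample t-test on PLS path coefficients.
    equal_var=True: Chin's formula (*), pooled SE with the squared
    (m-1) and (n-1) terms, df = m + n - 2.
    equal_var=False: formula (**) for unequal variances."""
    if equal_var:
        pooled = math.sqrt((m - 1) ** 2 / (m + n - 2) * se1 ** 2
                           + (n - 1) ** 2 / (m + n - 2) * se2 ** 2)
        return (path1 - path2) / (pooled * math.sqrt(1 / m + 1 / n))
    return (path1 - path2) / math.sqrt(se1 ** 2 + se2 ** 2)

print(pls_two_group_t(0.45, 0.08, 180, 0.30, 0.09, 200))         # formula (*)
print(pls_two_group_t(0.45, 0.08, 180, 0.30, 0.09, 200, False))  # formula (**)
```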
Summary of the Methodological Part

We have methodologically outlined the theoretical and practical perspectives of estimating PLS formative models. These techniques are illustrated next by an empirical example.

Empirical Illustration

To illustrate the above methodology we use a model (Figure 1) presenting the effects of Information Technology (IT) resources and business resources on the business value of IT (BVIT). The model is used as an example, and is by no means a comprehensive theoretical representation of all the constructs that theoretically and practically affect BVIT. Clearly, at least business strategy and IT strategy theoretically affect BVIT (Henderson and Venkatraman, 1993), yet these are excluded from the present illustration for simplicity's sake.

Theoretical Framework

The Resource Based View (RBV) of the firm (Barney, 1991b; Chamberlin, 1937; Melville, Kraemer and Gurbaxani, 2004; Penrose, 1959) and Contingency theory (Drazin and Van de Ven, 1985; Fry and Smith, 1987; Schoonhoven, 1981; Tosi and Slocum, 1984) have been used as the theoretical framework for the model, as well as the Strategic Alignment Model (SAM) (Henderson and Venkatraman, 1993).

Model Development

As illustrated in Figure 1, three constructs comprise the firm-level exemplary model: BVIT, the dependent variable, and IT Resources (ITR) and Business Resources (BR), the independent variables. All three constructs are posited to be formed by their respective indicators for the following reasons: 1) The various items are first acquired by the organization, and only then is the construct attained. Namely, business and IT resources are only attained after the organization puts in place adequate architecture, skills, processes, etc. Likewise, business value is acquired when the organization realizes the benefits accrued from these resources. Hence, the causality should point from the indicators to the construct. 2) The various items stem from different nomological antecedents and consequences. For example, processes and strategy are two distinctive domains that form BVIT, and resources include tangible and intangible resources, human capital, etc., all representing distinctive domains. 3) Most of the resources are not expected to correlate strongly. For instance, quite often tangible resources are quite strong whereas intangible ones, or human-capital-related resources, are not as strong. Similarly, IT can contribute to business value by improving processes yet insufficiently affect strategy, for example when IT and business strategies are not well aligned while resources are (Henderson and Venkatraman, 1993). 4) A change in the indicators is expected to cause a change in the construct rather than the other way around. For example, strengthening the technical skills of IT personnel would improve the overall quality of the IT resource, yet a change in IT resources does not necessarily imply a change in all items. Finally, 5) an omission of an indicator would change the nature of the construct. For instance, omitting intangible capital and human capital from the IT resources construct entails changing the terminology from 'overall IT resources' to 'IT physical (or tangible) resources'. It is hypothesized that both independent constructs positively affect BVIT (denoted H1 for the effect of BR on BVIT, and H2 for the effect of ITR).
The model does not include an error term, although it is quite unlikely that the proposed indicators fully form the constructs. Nonetheless, in light of the complexity of dealing with the error or disturbance term (Diamantopoulos, 2006), we leave the error out for simplicity's sake, yet discuss it further as a limitation later on.

Research Methodology

The research has been designed as a confirmatory study aimed, among other objectives, at eliciting scales for measuring the IT-Resources, Business-Resources and BVIT constructs, measuring their posited relationships, and assessing the effect of various alignment perspectives (Henderson and Venkatraman, 1993). A literature survey served as the source for identifying the content domains of each construct, based on the theoretical frameworks underlying the proposed model. Items were created and refined by means of several rounds of qualitative testing by academics and practitioners, and then pilot-tested on a group of thirty practitioners. The final questionnaire, which contained 7-point Likert scale items as well as demographic questions, was administered between January and June 2007 to about 500 IT and business executives in medium to large local enterprises spanning all industries. 400 questionnaires were obtained, of which 386 were usable. Data were analyzed using SPSS 15.0 and SmartPLS (Ringle, Wende and Will, 2005).

Model Evaluation

Evaluation of the Measurement Model

Content validity: The BVIT model contains three formative constructs. To establish the content validity of these constructs, a thorough literature review was conducted with respect to the different aspects of the constructs. An elaborate presentation of how content validity was established cannot be brought here due to paper length constraints, since it requires delving into theoretical analysis and synthesis. In principle, we analyzed the content domains of each of the three variables, briefly presented next.

BVIT is the IT-based value an organization accrues. This value is multifaceted and is formed by the effect of IT on the organization's processes and strategy (Henderson and Venkatraman, 1993). Thus, IT can affect the organization's processes by automating processes, informating process stakeholders, coordinating processes (Bharadwaj, 2000; Kohli and Grover, 2008; Radhakrishnan, Zu and Grover, 2008; Weill and Vitale, 1999), and by supporting the transformation of these processes in response to change (Eisenhardt and Martin, 2000; Teece, 1997). Under Contingency theory and SAM, IT should participate in shaping business strategy (Brynjolfsson and Hitt, 2000), which, in order for superior performance to be sustained, should be scalable to facilitate growth (Christensen, 2000; Dehning and Stratopoulos, 2003; Ross, Beath and Goodhue, 1996). (Footnote 2: Items used to measure each perspective were omitted due to length constraints and can be obtained from the authors.) Theoretically, these six components comprise the full content of BVIT.

The Business-Resources construct is formed by indicators representing business infrastructures, processes, and human capital. Business infrastructures include tangible capital (Barney, 1991a; Henderson and Venkatraman, 1993; Melville et al., 2004) and intangible capital, such as a deeply embedded culture of constant improvement and tacit knowledge (Eisenhardt and Martin, 2000).
Organizational human capital encompasses the professional experience, education, competencies, and commitments of the labor and management forces (Henderson and Venkatraman, 1993; Luftman, 1993), and processes reflect the methods by which resources are utilized to transform inputs into outputs (Barney, 1991a; Henderson and Venkatraman, 1993; Luftman, 1993; Melville et al., 2004). It is posited that these domains comprise the majority of this construct's content. Under the assumptions of Contingency theory and SAM, business and IT resources should be aligned in order to maximize BVIT; hence these two constructs are expected to correlate positively in the proposed BVIT context.

The IT-Resources construct includes technological infrastructure, processes, and human capital (Melville et al., 2004; Ross et al., 1996). While the physical part of the IT infrastructure has become a commodity and hence cannot render competitive edge as such (Carr, 2003), it can nonetheless create unique resources by providing services that are difficult to imitate, such as data integration across organization-wide systems, eliminating ineffective silo architecture (Ross et al., 1996). Similarly, IT processes can be interpreted as valuable resources when they are agile, adaptable, and adequate in the sense that they facilitate and support process re-design in response to environmental changes (Barua, Kriebel and Mukhopadhyay, 1995; Tallon, 2000). IT skills, comprising technical, managerial and business-relations skills, are likewise considered a paramount IT resource (Bharadwaj, 2000; Feeny, 1998; Mata, Fuerst and Barney, 1995; Melville et al., 2004; Ross et al., 1996). These resources encompass a substantial part of the IT-Resources construct. Thus, content validity for the three constructs is established on theoretical grounds.

Construct reliability: Construct reliability is established by indicator validity and the absence of multicollinearity. SmartPLS with bootstrapping (m = 386, n = 386) was used to obtain the indicator weights on their respective constructs and t-test values for path significance (Figure 1). Three OLS regressions were run in SPSS, with the PLS construct scores as dependent variables and the indicators as independent variables for each construct, to obtain VIF scores for the multicollinearity test. VIF values ranged from 1.049 to 2.352, showing that all tested formative latent variables met the requirements of indicator validity and were thus considered appropriate. It should be noted, however, that the 'Transform' indicator forming BVIT emerged as insignificant.

Construct validity: Discriminant validity was tested using the approach offered by Loch et al. (2003). For assessing discriminant validity, the standardized scores of all indicators (from SPSS), as well as the standardized weights of the three latent variables (from the PLS estimations), were obtained. The standardized measures of the variables were multiplied by their weights, and new composite constructs were calculated by summing up the retransformed indicators. Correlations between the weighted variables and the composite scores were run in SPSS, creating the MTMM matrix (Table 2). The MTMM analysis showed discriminant validity for all three formative constructs.
All individual indicators were found to be more highly correlated with their own constructs than with the other constructs. Convergent validity was assessed using the same MTMM matrix (Table 2), showing that convergent validity was achieved for all constructs, as the inter-indicator correlations behaved as theoretically expected. (Footnote 3: Items representing similar concepts were expected to correlate more than items representing non-similar concepts; a detailed explanation cannot be presented due to paper length constraints.) Nomological validity was achieved for all three constructs by means of drawing the nomological net, which cannot be elaborated here. Thus, by means of content validity, construct reliability, and construct validity, it was shown that the measurement model is appropriate and valid. These findings pave the way to evaluating the structural model next.

Figure 1: The BVIT Model. ***p < 0.001; **p < 0.01; *p < 0.05 (based on t(499), two-tailed test).

Evaluation of the Structural Model

Results of the structural model evaluation are presented in Figure 1. The central criterion for evaluating the structural model is the level of explained variance of the dependent construct BVIT, for which the R-square was 0.530; thus, the model explained 53% of the construct's variance. Likewise, all structural path coefficients are greater than 0.2 (Chin, 1998a). The change in R-square was explored to investigate the impact of each independent construct on the dependent construct, carrying out the effect size technique by rerunning two PLS estimations, with one independent construct excluded in each run. The results show that ITR has a medium effect on BVIT, while the effect of BR is small. The statistical significance of the path coefficients was tested by employing the bootstrapping resampling technique in SmartPLS, with the computational results presented in Figure 1. BVIT was found to be positively affected by BR (H1 supported with β = 0.297, p < 0.001) and by ITR (H2 supported with β = 0.528, p < 0.001). Tenenhaus et al. (2005) stated that if both the path coefficients and the related correlations have the same sign, it is possible to calculate the contribution power of each explanatory construct in predicting the dependent construct. The contribution test of the BVIT model revealed that ITR dominates the prediction of BVIT (with a contribution of 67.9%), while the contribution of BR was 32.1%.

For the evaluation of the predictive relevance of the structural model, the Stone-Geisser Q² test was performed using the blindfolding procedure. The blindfolding test, conducted with an omission distance of 7 (the recommended number), revealed that all Q² values were greater than zero (BVIT: 0.354; BR: 0.164; ITR: 0.238). Positive Q² values provide evidence that the omitted observations were well reconstructed and that predictive relevance is achieved; the computation is sketched below.
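For reference, the Q² value is a direct ratio of the sums of squares collected over the G omission rounds of the blindfolding procedure described in the methodological part; the SSE and SSO values below are illustrative, not the study's raw data.

```python
def stone_geisser_q2(sse_per_round, sso_per_round):
    """Q^2 = 1 - sum(SSE) / sum(SSO); Q^2 > 0 indicates predictive relevance."""
    return 1.0 - sum(sse_per_round) / sum(sso_per_round)

# Placeholder sums over G = 7 omission rounds:
sse = [12.1, 11.4, 13.0, 12.7, 11.9, 12.4, 12.2]
sso = [19.0, 18.5, 19.8, 19.2, 18.9, 19.4, 19.1]
print(round(stone_geisser_q2(sse, sso), 3))
```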
Multi-Group Analysis

A multi-group analysis was employed to assess potential differences in the impact of ITR and BR on BVIT between IT and business managers. Based on t-tests, no statistically significant differences between the path coefficients were evidenced (tITR = 0.937, tBR = 0.740), meaning the two groups did not significantly differ in conceptualizing these effects. Nonetheless, differences were found in the weights exerted by the measurement items on the various constructs, the discussion of which is deferred to another work.

Discussion and Conclusions

This paper has attempted to put together the disparate and inconsistent extant knowledge pertaining to formative constructs and their identification, specification, and evaluation. The methodological overview is empirically illustrated by actually estimating a formative model using SmartPLS. Although recent works (e.g., Petter et al., 2007) have addressed the issue of formative versus reflective constructs, as far as we know no prior study has presented the full process of evaluating both the measurement and the structural formative models. We also find the illustrative example quite instrumental.

Before proceeding to the conclusions and suggestions for future work, several limitations should be noted. First, disregarding the disturbance or error term is perhaps the most significant limitation of this work. Nonetheless, readers are referred to recent works that shed light on, and offered theoretical and practical guidelines for, specifying formative constructs when the variance of the disturbance term cannot be assumed to be zero (Diamantopoulos, 2006; Wilcox et al., 2008). Second, the model used for the empirical illustration is composed of three constructs whose content is quite complex, putting their content validity in question. Nonetheless, since the suggested indicators clearly obey the criteria for formative identification, and since the results are fairly clean, we find the model instrumental for clarifying and demonstrating the proposed methodology.

In sum, although we are aware of researchers calling for rethinking the overall validity of formative constructs (Howell et al., 2007), the majority of work dealing with this issue maintains that formative constructs are more appropriate in certain cases (Diamantopoulos and Siguaw, 2006; Petter et al., 2007), while others have actually used such constructs in research models (Collier and Bienstock, 2006; Kuan and Bock, 2007; Ma and Agarwal, 2007; Parboteeah, Valacich and Wells, 2009; Pavlou and Dimoka, 2006; Pavlou and El Sawy, 2006; Pavlou and Fygenson, 2006; Pavlou, Liang and Xue, 2007) that have theoretical and practical merit. Future work should culminate in a rigorous set of guidelines pertaining to formative models, including more accurate techniques for dealing with the error term, as well as with the complex issue of convergent validity, contributing to a more appropriate use of formative constructs.
Table 2: Multitrait-Multimethod Matrix (MTMM) Analysis
(Lower-triangular correlation matrix; the column order follows the row order: Tangible, IntCult, IntKnow, BusPro, HC, BR, Infr, ITP, TechSk, ManSk, BusRel, ITR, Automate, Informate, Coordinate, Transform, Shape, Scale.)

Tangible    1
IntCult     .398** 1
IntKnow     .210** .123* 1
BusPro      .689** .410** .193** 1
HC          .465** .487** .199** .467** 1
BR          .775** .754** .424** .763** .767** 1
Infr        .320** .332** .265** .358** .384** .470** 1
ITP         .348** .316** .154** .288** .326** .414** .279** 1
TechSk      .137** .148** .019 .109* .183** .178** .199** .032 1
ManSk       .589** .352** .291** .657** .471** .655** .373** .266** .256** 1
BusRel      .422** .210** .140** .409** .406** .445** .236** .139** .230** .460** 1
ITR         .606** .446** .285** .602** .562** .521** .578** .709** .395** .775** .584** 1
Automate    .399** .334** .179** .440** .384** .494** .266** .317** .052 .387** .252** .441** 1
Informate   .329** .291** .273** .322** .334** .436** .433** .323** .102* .319** .276** .462** .233** 1
Coordinate  .385** .285** .205** .367** .295** .433** .303** .207** .191** .463** .341** .469** .281** .229** 1
Transform   .356** .361** .202** .403** .342** .475** .289** .236** .007 .378** .182** .375** .438** .212** .297** 1
Shape       .363** .352** .232** .261** .349** .449** .505** .508** .160** .338** .252** .582** .220** .384** .304** .188** 1
Scale       .389** .270** .378** .376** .324** .477** .338** .285** .107* .419** .285** .469** .422** .238** .340** .326** .304** 1
BVIT        .566** .455** .395** .549** .502** .573** .526** .455** .176** .597** .424** .682** .657** .556** .687** .515** .567** .779**

Note: Latent variables (LVs) are bolded in the original table. **p < 0.01; *p < 0.05.

References

Acedo, F.J., and Jones, M.V. "Speed of internationalization and entrepreneurial cognition: Insights and a comparison between international new ventures, exporters and domestic firms," Journal of World Business (42:3) 2007, pp 236-252.
Anderson, J.C., and Gerbing, D.W. "Structural equation modeling in practice: A review and recommended two-step approach," Psychological Bulletin (103:3) 1988, pp 411-423.
Arcones, M. "On the asymptotic accuracy of the bootstrap under arbitrary resampling size," Annals of the Institute of Statistical Mathematics (55:3) 2003, pp 563-583.
Barney, J.B. "Firm resources and sustained competitive advantage," Journal of Management (17:1) 1991a, pp 99-120.
Barney, J.B. "The Resource Based View of Strategy: Origins, Implications, and Prospects," special theory forum, Journal of Management (17) 1991b, pp 97-211.
Barua, A., Kriebel, C.H., and Mukhopadhyay, T. "Information Technologies and Business Value: An Analytic and Empirical Investigation," Information Systems Research (6:1) 1995, pp 3-23.
Bharadwaj, A.S. "A Resource-Based Perspective on Information Technology Capability and Firm Performance: An Empirical Investigation," MIS Quarterly (24:1) 2000, pp 169-196.
Bickel, P.J., and Götze, F. "Resampling fewer than n observations: Gains, losses and remedies for losses," Statistica Sinica (7) 1997, pp 1-31.
Bickel, P.J., and Sakov, A. "On the choice of m in the m out of n bootstrap and confidence bounds for extrema," Statistica Sinica (18:3) 2008, pp 967-985.
Bollen, K.A. Structural Equations with Latent Variables, John Wiley, New York, 1989.
Bollen, K.A., and Lennox, R. "Conventional wisdom on measurement: A structural equation perspective," Psychological Bulletin (110) 1991, pp 305-314.
Brynjolfsson, E., and Hitt, L. "Beyond Computation: Information Technology, Organizational Transformation and Business Performance," Journal of Economic Perspectives (14:4) 2000, pp 23-48.
Campbell, D.T., and Fiske, D.W.
"Convergent and discriminant validation by the multitrait-multimethod matrix," Psychological Bulletin (56:2) 1959, pp 81-105. Carr, N.G. "IT doesn't matter," Harvard Business Review (81:5) 2003, pp 41-+. Chamberlin, E.H. "Monopolistic or Imperfect Competition," Quarterly Journal of Economics (51:4) 1937, pp 557- 580. Chatelin, Y.M., Vinzi, V.E., and Tenenhaus, M. "State-of-Art on PLS Path Modeling through the Available Software," Les Cahiers de Recherche, 764, Groupe HEC) 2002. Chernick, M.R. Bootstrap methods :a guide for practitioners and researchers, 2008. Chin, W.W. "Issues and Opinion on Structural Equation Modeling," MIS Quarterly (22:1) 1998a, pp vii-xvi. Chin, W.W. "The Partial Least Squares Approach to Structural Equation Modeling," in: Modern Methods for Business Research M. G.E. (ed.), Lawrence Erlbaum Associates, Mahwah, New Jersey, 1998b, pp. 295- 336. Chin, W.W., and Newsted, P.R. Structural Equation Modeling Analysis with Small Samples Using Partial Least Squares Sage Publications, Thousand Oaks, CA, 1999. Christensen, C.M., and Overdorf, M. "Meeting the Challenge of Disruptive Change," Harvard Business Review (78:2) 2000, pp 67-75. Chung, K.-H., and Lee, S.M.S. "Optimal Bootstrap Sample Size in Construction of Percentile Confidence Bounds," Scandinavian Journal of Statistics (28:1) 2001, pp 225-239. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, Hillsdale, NJ, 1988. Collier, J.E., and Bienstock, C.C. "Measuring Service Quality in E-Retailing," Journal of Service Research (8:3), February 1, 2006 2006, pp 260-275. Coltman, T., Devinney, T.M., Midgley, D.F., and Venaik, S. "Formative versus reflective measurement models: Two applications of formative measurement," Journal of Business Research (61:12) 2008, pp 1250-1262. Dehning, B., and Stratopoulos, T. "Determinants of a sustainable competitive advantage due to an IT-enabled strategy," Journal of Strategic Information Systems (12:1) 2003, pp 7-28. Diamantopoulos, A. "The error term in formative measurement models: interpretation and modeling implications," J Model Manage (1 1) 2006, pp 7–17. Thirtieth International Conference on Information Systems, Phoenix, Arizona 2009 15 Research Methods Diamantopoulos, A., Riefler, P., and Roth, K.P. "Advancing formative measurement models," Journal of Business Research (61:12) 2008, pp 1203-1218. Diamantopoulos, A., and Siguaw, J.A. "Formative Versus Reflective Indicators in Organizational Measure Development: A Comparison and Empirical Illustration," British Journal of Management (17:4) 2006, pp 263-282. Diamantopoulos, A., and Winklhofer, H.M. "Index Construction with Formative Indicators: An Alternative to Scale Development," Journal of Marketing Research (38 No. 2) 2001, pp 269-277. Drazin, R., and Van de Ven, A. "Alternative Forms of Fit in Contingency Theory," Administrative Science Quarterly (30:4) 1985, pp 514-539. Eisenhardt, K.M., and Martin, J.A. "Dynamic Capabilities: What Are They?," Strategic Management Journal (21:10-11) 2000, pp 1105-1121. Feeny, D.F., and Willcocks, L.P. "Core IS Capabilities for Exploiting Information Technology," Sloan Management Review (39:3) 1998, pp 9-21. Foulds, L.R., Quaddus, M., and West, M. "Structural equation modelling of large-scale information system application development productivity: the Hong Kong experience," 6th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2007), Melbourne, Australia, 2007, pp. 724-731. Fry, L.W., and Smith, D.A. 
"Congruence, contingency, and theory building," Academy of Management Review (12:1) 1987, pp 117-132. Gefen, D., Straub, D.W., and Boudreau, M.-C. "Structural Equation Modeling and Regression: Guidelines for Research and Practice," CAIS (4:7) 2000, pp 1-70. Golob, T.F. "Structural Equation Modeling for Travel Behavior Research," Center for Activity Systems Analysis Institute of Transportation Studies University of California, Irvine; Irvine, CA 92697-3600, U.S.A., p. 35. Henderson, J.C., and Venkatraman, N. "Strategic Alignment: Leveraging Information Technology for Transforming Organizations," IBM Systems Journal (32:1) 1993, pp 4-16. Howell, R.D., Breivik, E., and Wilcox, J.B. "Reconsidering formative measurement," Psychological Methods (12:2) 2007, pp 205-218. Jahner, S., Leimeister, J.M., Knebel, U., and Krcmar, H. "A Cross-Cultural Comparison of Perceived Strategic Importance of RFID for CIOs in Germany and Italy," in: Proceedings of the Proceedings of the 41st Annual Hawaii International Conference on System Sciences, IEEE Computer Society, 2008. Jarvis, C.B., MacKenzie, S.B., and Podsakoff, P.M. "A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research," Journal of Adolescence of Consumer Research (30) 2003, pp 199-218. Jöreskog, K., and Goldberger, A.S. "Estimation of a Model with Multiple Indicators and Multiple Causes of a Single Latent Variable," Journal of the American Statistical Association ( 70 (September)) 1975, pp 631-639. Keil, M., Tan, B.C.Y., Wei, K.-K., Saarinen, T., Tuunainen, V., and Wassenaar, A. "A cross-cultural study on escalation of commitment behavior in software projects," MIS Q. (24:2) 2000, pp 299-325. Kohli, R., and Grover, V. "Business Value of IT: An Essay on Expanding Research Directions to Keep up with the Times," Journal of the association for Information Systems (9:1) 2008, pp 23-39. Kuan, H.-H., and Bock, G.-W. "Trust transference in brick and click retailers: An investigation of the before-online- visit phase," Information & Management (44:2) 2007, pp 175-187. Loch, K.D., Straub, D.W., and Kamel, S. "Diffusing the Internet in the Arab world: the role of social norms and technological culturation," IEEE Transactions on Engineering Management (50:1) 2003, pp 45-63. Luftman, J.N., Lewis, P.R., and Oldach, S.H. "Transforming the Enterprise: The alignment of business and information technology strategies," IBM Systems Journal (32:1) 1993. Ma, M., and Agarwal, R. "Through a glass darkly: information technology design, identity verification, and knowledge contribution in online communities," Information Systems Research (18:1) 2007, p 42(26). MacKenzie, S.B., Podsakoff, P.M., and Jarvis, C.B. "The Problem of Measurement Model Misspecification in Behavioral and Organizational Research and Some Recommended Solutions," Journal of Applied Psychology (90:4) 2005, pp 710-730. Mata, F.J., Fuerst, W.L., and Barney, J.B. "Information Technology and Sustained Competitive Advantage: A Resource-based Analysis," MIS quarterly (19:4) 1995, pp 487-505. Melville, N., Kraemer, K., and Gurbaxani, V. "Review: Information Technology and Organizational Performance: An Integrative Model of IT Business Value," MIS quarterly (28:2) 2004, pp 283-322. Parboteeah, D.V., Valacich, J.S., and Wells, J.D. "The influence of website characteristics on a consumer's urge to buy impulsively.(Technical report)," Information Systems Research (20:1) 2009, p 60(19). 
Pavlou, P.A., and Dimoka, A. "The nature and role of feedback text comments in online marketplaces: Implications for trust building, price premiums, and seller differentiation," Information Systems Research (17:4) 2006, pp 392-414.
Pavlou, P.A., and El Sawy, O.A. "From IT leveraging competence to competitive advantage in turbulent environments: The case of new product development," Information Systems Research (17:3) 2006, pp 198-227.
Pavlou, P.A., and Fygenson, M. "Understanding and Predicting Electronic Commerce Adoption: An Extension of the Theory of Planned Behavior," MIS Quarterly (30:1) 2006, pp 115-143.
Pavlou, P.A., Liang, H., and Xue, Y. "Understanding and Mitigating Uncertainty in Online Exchange Relationships: A Principal-Agent Perspective," MIS Quarterly (31:1) 2007, pp 105-136.
Penrose, E.T. The Theory of the Growth of the Firm, Wiley, New York, 1959.
Petter, S., Straub, D., and Rai, A. "Specifying formative constructs in information systems research," MIS Quarterly (31:4) 2007, pp 623-656.
Radhakrishnan, A., Zu, X., and Grover, V. "A process-oriented perspective on differential business value creation by information technology: An empirical investigation," Omega (36:6) 2008, pp 1105-1125.
Ringle, C.M., Wende, S., and Will, A. SmartPLS 2.0 (M3) Beta, Universität Hamburg, Hamburg, 2005.
Ross, J.W., Beath, C.M., and Goodhue, D.L. "Develop Long-Term Competitiveness through IT Assets," Sloan Management Review (38:1) 1996, pp 31-42.
Sánchez-Franco, M.J. "Exploring the influence of gender on the web usage via partial least squares," Behaviour & Information Technology (25:1) 2006, pp 19-36.
Schoonhoven, C.B. "Problems with Contingency Theory: Testing Assumptions Hidden within the Language of Contingency 'Theory'," Administrative Science Quarterly (26:3) 1981, pp 349-377.
Straub, D., Boudreau, M.C., and Gefen, D. "Validation guidelines for IS positivist research," Communications of the Association for Information Systems (13) 2004, pp 380-427.
Tallon, P.P., Kraemer, K.L., and Gurbaxani, V. "Executives' Perceptions of the Business Value of Information Technology: A Process-Oriented Approach," Journal of Management Information Systems (16:4) 2000, pp 145-174.
Teece, D.J., Pisano, G., and Shuen, A. "Dynamic Capabilities and Strategic Management," Strategic Management Journal (18:7) 1997, pp 509-533.
Tenenhaus, M. "Component-based Structural Equation Modelling," Total Quality Management & Business Excellence (19:7) 2008, pp 871-886.
Tenenhaus, M., Vinzi, V.E., Chatelin, Y.-M., and Lauro, C. "PLS path modeling," Computational Statistics & Data Analysis (48:1) 2005, pp 159-205.
Tosi, H.L., and Slocum, J.W. "Contingency Theory: Some Suggested Directions," Journal of Management (10:1) 1984, pp 9-26.
Trochim, W.M.K. "Convergent & Discriminant Validity," last updated October 20, 2006, accessed May 4, 2009, http://www.socialresearchmethods.net/kb/convdisc.php
Weill, P., and Vitale, M. "Assessing the Health of an Information Systems Applications Portfolio: An Example From Process Manufacturing," MIS Quarterly (23:4) 1999, pp 601-624.
Wilcox, J.B., Howell, R.D., and Breivik, E. "Questions about formative measurement," Journal of Business Research (In Press, Corrected Proof) 2008.
Wullenweber, K., and Weitzel, T.
"An Empirical Exploration of How Process Standardization Reduces Outsourcing Risks," in: Proceedings of the 40th Annual Hawaii International Conference on System Sciences, IEEE Computer Society, 2007. Thirtieth International Conference on Information Systems, Phoenix, Arizona 2009 17 View publication stats
