A Review of Psychological Assessment Practice in the Philippines (2020)
Document Details
De La Salle University Araneta
2020
Maria Caridad H. Tarroja, Ma. Araceli B. Alcala, Patricia D. Simon, Jeffrey D. Sanchez
Summary
This paper reviews the practice of psychological assessment in the Philippines using a framework of evidence-based psychological assessment. Key findings include the need for supervision, continuous training, and the development of local assessment tools. The study draws on quantitative and qualitative data and examines how current practices align with evidence-based practice.
Full Transcript
Philippine Journal of Psychology, 2020, 53, 81-115. doi: 10.31710/pjp/0053.04
Copyright © 2020 Psychological Association of the Philippines

A Review of Psychological Assessment Practice in the Philippines: What Do Some Practitioners Say?

Maria Caridad H. Tarroja, Ma. Araceli B. Alcala, Patricia D. Simon, Jeffrey D. Sanchez
De La Salle University - Manila

This paper reviews the practice of psychological assessment in the Philippines guided by Bornstein's (2017) framework of evidence-based psychological assessment (EBPA). One hundred fifty-one (N = 151) respondents, the majority of whom are registered psychometricians, answered a survey on their current practices, knowledge, skills, and attitudes. Part 1 of the study presented practitioners' training and practice, and their knowledge, skills, and perceptions about psychological assessment. Part 2 considered current assessment practices, from test selection to delivering test results. Thematic analysis was conducted on respondents' answers to open-ended questions in the survey to explore the challenges, best practices, and needs of practitioners. Despite the limitations of the sample, the initial findings of this survey provide baseline data on how some assessment practitioners perceive the state of the practice in the country. Given limited experiences in assessment, the majority of the respondents acknowledged the need for a supervisor, continuous training and development, standardization of the assessment process, and development of local tools. Both the quantitative and qualitative data show that some of the current practices in assessment are aligned with EBPA components, although there is still a need for practitioners to adhere more consistently to an evidence-based practice in assessment.

Keywords: assessment practitioners, registered psychometricians, psychological assessment, psychological tests

Correspondence concerning this article can be addressed to Maria Caridad Tarroja.
Email: [email protected]

The International Declaration on Core Competences in Professional Psychology (IPCP, 2016) identifies the conduct of psychological assessments and evaluations as among the competencies of practicing psychologists, alongside setting relevant goals, conducting psychological interventions, and communicating effectively and appropriately. Following the guidelines of the American Psychological Association (APA), Bornstein (2017) defined proficiency in assessment as the ability of practitioners to do the following: evaluate the construct validity of psychological assessment tools, construct an assessment battery, administer and score individual measures, interpret the results of these measures, integrate data from different instruments, and communicate assessment findings. These standards invite psychological assessment practitioners to review and reflect on their assessment practices.

Previous surveys have been conducted on practices pertaining to psychotherapy and counseling in the Philippines (e.g., Tarroja, Catipon, Dey, & Garcia, 2013; Teh, 2003; Tuason, Fernandez, Catipon, Dey, & Carandang, 2012), but there has yet to be a review of the current practice of psychological assessment. Assessment is a key activity of psychology practitioners. Thus, upholding standards in the practice is critical for two reasons: first, psychological assessment assists in making important decisions that have implications for the daily lives of people who seek it (e.g., formulating treatment, educational interventions, or employment decisions); second, it lends more credence to the professionalization of psychology in the Philippines. Hence, this study looks into the current assessment practices of Filipino psychological assessment practitioners and how these practices meet the standards of an evidence-based psychology practice.
Research shows that while psychologists may be aware of evidence-based practices and theoretical frameworks that guide the assessment process, they may not always adhere to these in actual settings. Some modify practices to suit the culturally and linguistically diverse populations they serve (Sotelo-Dynega & Dixon, 2014), or to respond to barriers encountered (e.g., respondents not understanding items; clinicians' lack of access to tests they need). Likewise, practitioners may disagree about the theoretical framework to use or the specific instruments to utilize in order to diagnose a disorder (Meteyard & Gilmore, 2015).

The following section reviews existing documents and literature on the practice of psychological assessment, such as its legal definition and issues faced by practitioners. The issues are further categorized in terms of the training of practitioners, tools used, process of assessment, and stakeholders in the assessment process.

Defining Psychological Assessment and Psychological Testing

While there is a clear distinction between psychological assessment and psychological testing as described in the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014), "the semantic distinction between psychological testing and psychological assessment is blurred in everyday conversation" (Cohen & Swerdik, 2009, p. 13). This blurred distinction is also observed among assessment practitioners, as the terms are often used interchangeably.
Conceptually, psychological assessment is defined as "the gathering and integration of psychology-related data for the purpose of making a psychological evaluation that is accomplished through the use of tools such as tests, interviews, case studies, behavioral observation, and specially designed apparatuses and measurement procedures," whereas psychological testing is "the process of measuring psychology-related variables by means of devices or procedures designed to obtain a sample of behavior" (Cohen & Swerdik, 2009, p. 14). These definitions may be perceived as overlapping and related, and hence may result in confusion even among practitioners. It is therefore important to understand the practice of Filipino assessment practitioners vis-à-vis the current legal definition and practitioners' interpretation of the terminology.

The Psychology Act (2010) defines psychological assessment as:

    ...gathering and integration of psychology-related data for the purpose of making a psychological evaluation, accomplished through a variety of tools, including individual tests, projective tests, clinical interview and other psychological assessment tools, for the purpose of assessing diverse psychological functions including cognitive abilities, aptitudes, personality characteristics, attitudes, values, interests, emotions and motivations, among others, in support of psychological counseling, psychotherapy and other psychological interventions.

On the other hand, the Guidance and Counseling Act (2004) refers to psychological testing as a function of registered guidance counselors. Members of the Philippine Regulatory Board for Guidance and Counseling noted that the term "psychological testing" is dated and is interpreted in the guidance and counseling profession to be the same as psychological assessment (E. Morada, personal communication, July 11, 2020; C. Pabiton, personal communication, July 12, 2020).
Given this, the practice of assessment surveyed in this paper includes both psychological assessment and psychological testing. For consistency and alignment with international standards, the term psychological assessment is used throughout the paper. Psychological assessment is an important function of psychologists in different fields, as professionals use it to make important decisions: diagnoses and treatment plans; recommendations for hiring, promotion, and training; and educational placement and interventions.

Competences and Training of Assessment Practitioners

One of the issues in the practice of assessment relates to whether the training of practitioners is aligned with the expected competences. Krishnamurthy et al. (2004) identified eight core competencies in the practice of psychological assessment: (1) a background in the basics of psychometric theory; (2) knowledge of the scientific, theoretical, empirical, and contextual bases of psychological assessment; (3) knowledge, skills, and techniques to assess the cognitive, affective, behavioral, and personality dimensions; (4) the ability to assess the outcomes of treatments and interventions; (5) the ability to evaluate the prospective roles played by both clients and psychologists and the impacts of such roles; (6) the ability to establish and maintain collaborative professional relationships; (7) an understanding of the relationship between assessment and intervention; and (8) possession of technical assessment skills.

There is no existing local literature that looks into how assessment practitioners are trained in each of these areas. In the United Kingdom, a survey of clinical psychologists found that most practitioners learned by doing and observing others in clinical practice (Nel, Pezzolesi, & Stott, 2012). However, there also seems to be a dearth of adequate supervision that could guide the learning process.
Hence, it appears that there needs to be more research on the specific learning activities that facilitate the acquisition of competencies among practitioners.

Tools Used in Psychological Assessment

Much of the existing literature on psychological assessment delves into the tools practitioners use (Archer, Buffington-Vollum, Stredny, & Handel, 2006; Piotrowski, 1999; Wright et al., 2017); the knowledge, attitudes, and skills required to utilize tests; as well as the training, practices, and challenges experienced by practitioners in using assessment tools (Meteyard & Gilmore, 2015). Psychologists in different parts of the world are now more aware of using evidence-based practices and highlight fundamental and ethical considerations when selecting test materials to use for assessment (Wright et al., 2017). For instance, there is now a greater call towards using tests with strong psychometric properties (Musewicz, Marczyk, Knauss, & York, 2009). Furthermore, Bernardo (2011) pointed out concerns in using psychological tests translated to Filipino, as they may not be conceptually and structurally equivalent to the original versions of the scales. Some Filipino psychology researchers have therefore responded by conducting validation studies to look into the applicability of foreign tests in the Philippine context and the equivalence of English and Filipino versions of the same test (Bernardo & Estrellado, 2014; Bernardo, Lising, & Shulruf, 2013; Ganotice, Bernardo, & King, 2012a; Nalipay, Bernardo, Tarroja, & Bautista, 2018). Piotrowski (1999) found that despite existing constraints, many Western practitioners still opted to use more traditional measures such as the Minnesota Multiphasic Personality Inventory (MMPI), the Wechsler Intelligence Scales, the Symptom Checklist-90, the Bender-Gestalt, and the Beck Depression Inventory, as well as projective techniques such as the Rorschach, the Thematic Apperception Test, and sentence completion tests.
In a survey of psychologists, traditional clinical assessment tools such as the MMPI-2 and the Wechsler Intelligence Scales were found to be more popularly used due to their perceived utility (Archer et al., 2006). The use of projective tests in assessment has received some harsh criticism in recent years, particularly due to the alleged poor psychometric properties of these techniques (Lilienfeld, Wood, & Garb, 2000). However, there is also much support coming from other researchers about the utility of projective techniques in assessment, and a call to continue doing validation studies on these techniques (Hibbard, 2013).

The Process of Psychological Assessment

The process of psychological assessment starts with the reason for evaluation or assessment, sometimes referred to as the referral question, and ends with the recommendations or the evaluation of assessment results. In between, practitioners select, administer, score, and interpret the assessment tools, and then integrate the findings (Groth-Marnat & Wright, 2016; Pawlik, Zhang, Vrignaud, Roussalov, & Fernandez-Ballesteros, 2000). Other studies have looked into more specific psychological assessment activities such as test feedback training, supervision, and practice (Curry & Hanson, 2010).

The Present Study

The conduct of psychological assessment and psychological interventions are among the main functions expected of psychologists (IPCP, 2016; The Psychology Act, 2010). This survey seeks to understand the practice of psychological assessment in the Philippines using Bornstein's (2017) evidence-based psychological assessment (EBPA). In his unified framework, Bornstein (2017, p. 8) outlined the steps that operationalize EBPA:

(1) Develop the knowledge, skills, and attitudes necessary for proficiency in psychological assessment.
(2) Take steps to remain current regarding theoretical and empirical developments in other related areas (e.g., cognitive psychology, neuroscience), and integrate this knowledge into assessment practice.
(3) Use empirically validated assessment tools that yield scores with documented clinical utility, and that fulfill criteria for universal test design.
(4) Use test scores for outcomes and variables for which they have been validated, and report accordingly.
(5) Where possible, use multiple methods to assess a given construct, and describe the rationale for selecting each tool, alone and in combination with other measures, with attention to incremental validity.
(6) Enumerate meaningful test score convergences and divergences, with an interpretation of and explanation for each. Contextualize test score divergences.
(7) Use self-monitoring throughout the assessment process to increase awareness of the impact of unstated assumptions, stereotypes, heuristics, and other sources of bias on assessment results.
(8) Be aware of the synergistic interaction of patient and assessor identities throughout the assessment process.
(9) Communicate test results to multiple stakeholders using language appropriate for the person receiving feedback.

Using Bornstein's EBPA, this study aims to describe the assessment practices of Filipino psychological assessment practitioners in terms of who engages in psychological assessment (their profile, competencies, professional training and development), the tools they use (psychological tests and instruments), how they conduct the assessment (the process from the identification of the reason for the assessment to the communication of findings), and the stakeholders in the process (depending on the contexts). Figure 1 illustrates the elements of the survey based on the EBPA operationalization.

Figure 1. Understanding psychological assessment practice using Bornstein's (2017) evidence-based psychological assessment (EBPA).

METHOD

Respondents

There were 151 psychological assessment practitioners who answered the survey. While the majority are registered psychometricians, 33% are registered psychologists, and 7% are registered guidance counselors. The inclusion of registered guidance counselors in the sample is justified by how psychological assessment is conceptualized in this paper as a practice that captures both assessment and testing. Based on both the Guidance and Counseling Act (2004) and the Psychology Act (2010), these licensed professionals are permitted to practice psychological assessment and psychological testing in the Philippines to different extents, depending on the scope and limits of their licenses. The respondents came from different parts of the country. Table 1 describes the demographic profile of the participants, including their length of practice. Although three respondents (2%) from the pool reported having practiced in other countries, they were practicing assessment in the Philippines at the time of data collection. With regard to the reason for assessment, most of the respondents conduct assessment for educational purposes (66.7%). A large portion of the respondents also conduct assessment for industrial/employment purposes (55.3%) and for psychiatric purposes (29.3%).

Data Gathering Instrument

The research team created a survey instrument composed of both close-ended and open-ended questions written in English. The survey had two versions: a paper-and-pen version and a digital version through Google Forms. The questionnaire was primarily guided by the process of psychological assessment as explained in EBPA. The first section of the survey collected data on the profile of the respondents (e.g., gender, age, professional license, practice setting, length of practice, etc.).
This part also asked questions about the education, training, and supervision received by the respondents. The next section asked about the context in which the respondents learned the administration, interpretation, and scoring of projective techniques, as well as their attitudes towards the use of these techniques. The subsequent section asked about the respondents' professional practice (e.g., referral sources, type of clients, cases encountered), and included more specific questions pertaining to the context and process of their assessment practice. They were also asked about their assessment and non-assessment activities. This was followed by questions on test administration, interpretation and analysis of data, and the write-up of the psychological report. Next were questions about their knowledge of psychometric properties, standard scores, test bias, test development, and translation. The respondents were then asked to rate, on a scale of 1 to 5, how important they think it is for assessment professionals to possess the qualities of optimism, openness to experience, conscientiousness, extraversion, and agreeableness.

While choices were provided for the test items in the aforementioned sections of the survey, an "Other" option was also offered, which allowed respondents to provide responses not included among the existing choices. This option to allow for free responses was especially necessary considering the wide array of specific tools that can be used by the respondents. The final section urged the respondents to share their experiences in the practice of assessment through a series of open-ended questions.

The instrument was reviewed by three assessment practitioners to evaluate the comprehensiveness of the content, wording, and response time. The tool was administered in a pretest with 30 participants.
Results of the pretest were used to further improve the instrument (e.g., changing the order of some questions and adding more choices in some questions).

Data Gathering and Analysis Procedure

Participants for the online survey were recruited through a Google Forms link posted on social media platforms and sent through personal emails by the researchers to their colleagues who they knew to be assessment practitioners. Paper-and-pen surveys, on the other hand, were distributed to participants of workshops on psychological assessment. Before presenting the actual survey questions, respondents were presented with an informed consent form detailing the benefits, risks, and confidentiality issues involved in participating in the study. The survey consisted of 92 items, inclusive of four open-ended questions, and took about 30 minutes to complete. The responses from Google Forms were combined with those manually encoded in Microsoft Excel from the paper-and-pen surveys. The data were then analyzed using the Statistical Package for the Social Sciences (SPSS). Qualitative data were coded through thematic analysis by two raters. The raters first came up with the themes independently and then finalized the themes consensually.

RESULTS

The first section of the results is the quantitative description of the practice of psychological assessment in the Philippines based on Bornstein's (2017) evidence-based psychological assessment (EBPA): the practitioner, tests, process, and stakeholders. Frequencies and percentages are presented to give a better understanding of assessment practice based on the self-reports of the 151 respondents. The second section is the qualitative description of the practice and discusses the themes that emerged from the open-ended questions on facilitating factors, challenges, and best practices of the practitioners.

Description of the Practice: Quantitative Data

Characteristics of respondents.
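The frequency-and-percentage breakdowns reported in this and the following subsections can be reproduced from coded survey records in a few lines. The sketch below is illustrative only: the field names and sample records are hypothetical, not the study's data. For multi-response items (e.g., respondents holding more than one license), each response is counted per mention, which is why percentages in tables like Table 1 can sum to more than 100.

```python
from collections import Counter

# Hypothetical coded survey records. "licenses" is a multi-response item,
# so one respondent can contribute to several categories.
respondents = [
    {"id": 1, "licenses": ["RPm"]},
    {"id": 2, "licenses": ["RPm", "RPsy"]},
    {"id": 3, "licenses": ["RGC"]},
    {"id": 4, "licenses": ["RPsy"]},
]

def frequency_table(records, field):
    """Return {category: (N, percent of respondents)} for a multi-response field."""
    counts = Counter(value for record in records for value in record[field])
    n = len(records)
    return {category: (c, round(100 * c / n, 1)) for category, c in counts.items()}

print(frequency_table(respondents, "licenses"))
# {'RPm': (2, 50.0), 'RPsy': (2, 50.0), 'RGC': (1, 25.0)}
```

Because the denominator is the number of respondents rather than the number of responses, the percentages describe "share of respondents mentioning each category," matching how the tables below are read.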
The demographic profile of respondents includes educational background, type of license held, and training received.

Educational background. A plurality of the respondents hold a Bachelor's degree in Psychology (N = 72, 47.7%), 33% (N = 50) hold a Master's degree in Psychology, 7.3% (N = 11) hold a doctoral degree, and 4.6% (N = 7) hold a Master's degree in Guidance and Counseling. Sixty-one percent (N = 92) of the respondents graduated from a private university, and a great majority received training in psychological assessment during their practicum (N = 124, 82.1%).

Licenses. Table 1 shows that the majority of the respondents are registered psychometricians (N = 114, 75.5%). Some respondents were Registered Psychologists (N = 49, 32.5%) and some were Registered Guidance Counselors (N = 11, 7.3%). Still, 39 respondents (25.8%) indicated that they hold more than one professional license.

Table 1. Demographic Profile of Respondents

  Type of Licensure                       N     %
    Registered Psychometricians          114   75.5
    Registered Psychologists              49   32.5
    Registered Guidance Counselors        11    7.3
  Place of Practice
    National Capital Region (NCR)         84   55.6
    Luzon                                 57   37.7
    Visayas                                7    4.6
    Mindanao                               7    4.6
    Other                                  3    2.0
  Gender
    Female                               115   76.2
    Male                                  36   23.8

                                  Range    Mean    SD
  Age                             21-68    33.37   10.52
  Length of Practice (in years)   1-42     8.04    7.75

Training. Table 2 shows that while many of the 151 respondents have taken courses in personality testing, projective techniques, psychological measurement, and intelligence testing, only a few have received training in specific tests such as the Rorschach (N = 11, 7.3%) and the Myers-Briggs Type Indicator (MBTI; N = 9, 6.0%).

Table 2. Assessment Courses, Certificate Training, and Other Assessment-Related Topics Taken

  Assessment Courses                                      N     %
    Personality Tests                                     91   60.3
    Psychological Measurement                             84   55.6
    Projective Techniques                                 83   55.0
    Intelligence Testing                                  80   53.0
    Test Construction                                     56   37.1
    Rorschach                                             38   25.2
    Other                                                  7    4.6
  Certificate Training
    Other                                                 21   13.9
    Rorschach                                             11    7.3
    Minnesota Multiphasic Personality Inventory (MMPI)    10    6.6
    Myers-Briggs Type Indicator (MBTI)                     9    6.0
  Other Assessment-Related Topics
    Administration of Tests                               82   54.3
    Issues in Assessment                                  71   47.0
    Report Writing                                        69   45.7
    Psychometric Properties of Tests                      63   41.7
    Test Development                                      50   33.1
    Other                                                  6    4.0

In addition, more than half of the respondents (N = 82, 54.3%) have also received training in test administration and other assessment-related topics. These topics include issues in assessment (N = 71, 47%), report writing (N = 69, 45.7%), and test development (N = 50, 33.1%). When it comes to projective techniques, most respondents reported that they received training in the administration, scoring, and interpretation of tools such as Human Figure Drawings, House-Tree-Person, and the Thematic Apperception Test (TAT). Among the highest percentages in these areas were for Human Figure Drawings, where 83.4% of participants reported being trained in its administration, 78.8% in its scoring, and 79.5% in its interpretation during their academic studies. On the other hand, results showed that fewer participants have taken a course on the Rorschach (N = 38, 25.2%) or undergone certificate training for the Rorschach (N = 11, 7.3%).

Supervision. Most of the respondents noted that they received supervision during their practicum training.
Of those respondents, 51.6% (N = 77) indicated that their supervisor was a psychologist, while 16.7% (N = 25) were supervised by guidance counselors. Other respondents described themselves as independent practitioners (33%); these are practitioners who did not receive supervision from a psychologist or psychometrician. Meanwhile, 14 respondents (9.27%) identified themselves as supervisors. Results suggest that there is no standard or minimum number of hours for supervision; it may depend on the practicum site or supervisor. While 16% of practitioners received between two and eight hours of supervision, 17.3% received an average of less than two hours of supervision per week. The most common types of supervision involved face-to-face feedback about the administration and scoring of the instruments used (40%), face-to-face feedback about the report (36.7%), and case conferencing (34%). For independent practitioners and supervisors, 43.3% received face-to-face feedback about the mechanics of writing the report, 42.7% received face-to-face feedback about the administration and scoring of the instruments used, and 41.3% received face-to-face feedback about the report's content and analysis.

Practice contexts or settings. A look at the different settings where psychological assessment is being conducted revealed that the respondents were not limited to practicing in one context. Table 3 shows the various contexts where respondents practiced.

Type of clients. As shown in Table 4, most of the clients seen for assessment by the respondents are young adults, followed by adolescents, and then adults. The age group least commonly seen for assessment is children in the early childhood stage. The practitioners surveyed see a relatively equal percentage of male and female clients.
Most of the clients seen for assessment have no clinical conditions; when clients are seen for clinical conditions, most have behavioral problems, depressive disorders, and anxiety disorders. Respondents noted that most referrals come from guidance counselors (N = 84, 56%), followed by teachers/tutors (N = 74, 49.3%), and self-referrals (N = 72, 48%).

Table 3. Setting of Practice

  Setting                                          N    %
  School                                          86   57.0
  Private clinic (sole proprietorship/employee)   62   41.1
  Government setting                              34   22.5
  Shelters and similar facilities                 32   21.2
  Religious institution                           32   21.2
  Industrial setting/OFW clinic                   26   17.2
  Hospital setting (psychiatric/non-psychiatric)   5    3.3
  Rehabilitation center                            5    3.3
  Other                                            4    2.6

Table 4. Age Groups of Assessment Clients

  Clients                                  N    %
  Young adults (19-30 years old)         112   74.7
  Adolescents (13-18 years old)          102   68.0
  Adults (31-60 years old)                94   62.7
  School-aged children (6-12 years old)   68   45.3
  Elderly (61 years and above)            39   26.0
  Toddlers (2-5 years old)                33   22.0

Tools practitioners use. This includes test selection and the instruments used in the test battery.

Table 5. Tests for Cognitive Ability Used in Assessment

  Cognitive Ability Test                                                N    %
  Raven's Standard Progressive Matrices                                74   48.7
  Purdue Non-Language Test                                             43   28.7
  Otis-Lennon Mental Ability Test (OLMAT)                              41   27.3
  Wechsler Adult Intelligence Scale - Fourth Edition (WAIS-IV)         32   21.3
  Wechsler Adult Intelligence Scale (WAIS)                             28   18.7
  Wechsler Adult Intelligence Scale - Revised (WAIS-R)                 25   16.7
  Stanford-Binet Intelligence Scale - Fifth Edition (SB5)              25   16.7
  Wechsler Adult Intelligence Scale - Third Edition (WAIS-III)         19   12.7
  Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV)  19   12.7

Test selection.
While psychometric properties, familiarity, and referral questions are considered, respondents reported that the availability of the test (N = 132, 87.3%) is the factor that most determines the test to be used.

Instruments used in the test battery. Among tests that measure behaviors, the ADHD Checklist is the most utilized (N = 44, 29.3%), followed by the Child Behavior Checklist (N = 41, 27.3%). Among tests measuring adaptive functioning, the Vineland Adaptive Behavior Scales is used the most (N = 14, 9.3%). Among locally developed tests, the Panukat ng Pagkataong Pilipino or PPP (N = 27, 18%) is the most cited. Table 5 displays the commonly used tests for cognitive ability in assessment, with pen-and-paper tests that can be administered to groups topping the list. Only a very small percentage of the respondents utilize other methods of assessment such as online tests (N = 6, 4%). Table 6 presents the commonly used tests for personality.

Activities engaged in by assessment practitioners. The respondents are engaged in various activities, the most frequent being writing psychological assessment reports, administering tests, and scoring and interpreting particular tests. The majority of the respondents also engage in interviewing. With regard to non-assessment activities, a little more than half of the respondents (N = 89, 59.3%) are involved in administrative work, and about half of them (N = 81, 54%) also engage in research. A third of the respondents (N = 50) put in less than 8 hours a week of assessment-related work, 20% (N = 30) put in 8 to 16 hours, while 12.7% (N = 19) put in 16 to 24 hours. Test administration usually takes half a day for 36% (N = 54) of the respondents.

Psychological assessment process. The psychological assessment process includes informed consent, the intake interview, other sources of information in assessment, standard testing procedures, and report writing and the giving of feedback.

Informed consent.
Thirty-eight percent (N = 57) often or always ask for written consent, 16.7% (N = 25) often or always ask for oral consent, 14% (N = 21) sometimes ask for written consent and sometimes for oral consent, and 24% (N = 36) do not ask for consent. The written informed consent usually includes issues pertaining to confidentiality, information regarding who gets a copy of the report, and the extent of legal liability. When asking for oral consent, practitioners often cover the same issues.

Table 6. Tests for Personality Used in Assessment

  Personality Test                                                    N    %
  Sentence Completion                                                99   65.3
  Draw-A-Person                                                      91   60.0
  16 Personality Factor Questionnaire (16PF)                         85   56.0
  House-Tree-Person (HTP)                                            74   49.3
  Thematic Apperception Test (TAT)                                   62   41.3
  Family Drawings                                                    47   31.3
  Myers-Briggs Type Indicator (MBTI)                                 45   30.0
  NEO Personality Inventory-Revised (NEO PI-R)                       43   28.7
  Minnesota Multiphasic Personality Inventory (MMPI/MMPI-2/MMPI-A)   37   24.7
  Millon Clinical Multiaxial Inventory (MCMI)                        17   11.3

Intake interview. Sixty-three percent (N = 95) of the respondents conduct an intake interview before assessment. For adult clients, 54% (N = 82) interview only the adult himself/herself, and 46.7% (N = 71) interview the adult client and another informant (e.g., parent, spouse/partner, employer). For child clients, 55.3% (N = 84) interview parents/guardians, 38.7% (N = 58) interview teachers, 15.3% (N = 23) interview therapists, and 36% (N = 54) interview the child clients themselves. Most of the interviews conducted are semi-structured in form.

Other sources of information in assessment. Behavioral observations are usually obtained from the assessment session itself (N = 139, 92%), and some from school observations (N = 50, 33.3%). Collateral information sometimes comes from school records (N = 90, 59.3%) and previous reports on the client (N = 90, 59.3%).

On standard testing procedures.
While close to half of the respondents (N = 74, 48.7%) claim to strictly follow standard testing procedures, the majority of the practitioners (N = 116, 76.7%) admitted to deviating from standard testing procedures at times. These deviations include translating instructions into Filipino or the local language/dialect (N = 87, 57.3%), translating items/questions into Filipino or the local language/dialect (N = 60, 40%), and explaining the instructions or items beyond what is written in the manual (N = 59, 39.3%). The most frequently cited reasons for these deviations are to ensure that the client understands the instructions, the questions, or the items, and to make items culturally appropriate or sensitive. In the interpretation and analysis of data, the biggest weight is given to psychological tests, followed by behavioral observation, clinical interview, and clinical judgment.

Report writing and giving of feedback. For those who write screening reports, 40% (N = 60) produce reports that are less than 5 pages in length. Reports most often contain identifying information on the client, sources of data, behavioral observations, and summary/conclusions. The person who referred the child client mostly gets access to the report (N = 94, 62%). For adult clients, it is usually the clients themselves who get access (N = 87, 57.3%). The parents/guardians of minor clients are also given access to the report (N = 67, 44.7%). Oral feedback is given by 65.3% (N = 98) of the respondents. For children, feedback is given to the parents/guardians (N = 15, 10%), to the teachers/counselors (N = 11, 7.3%), and to the referring parties (N = 22, 14.7%). For respondents seeing adult clients, 15.3% (N = 23) give feedback to the clients themselves.

Challenges, Best Practices, and Facilitating Factors: Qualitative Data

The qualitative data are organized in terms of the themes that emerged under three major clusters: challenges, best practices, and facilitating factors.
Each theme is described and illustrated by exemplars and direct quotes from the participants.

Challenges. The themes under challenges covered points 3 to 9 of EBPA, from using empirically validated tools to communication of assessment findings.

Availability and cost of test materials. Common challenges encountered by respondents were the availability and cost of test materials. Practitioners reported that the sites where they worked did not buy needed test materials because these were expensive. Thus, although many asserted that they used original test forms, there were those who admitted to having used photocopied or reproduced materials, which were also outdated. This response is quite typical of those working in the Human Resources departments of their companies. Reasons for not investing in good test materials included: "employer has a tight budget," "no support from the employer in terms of ethics in my profession," and "our company does not give importance to testing materials." For the respondents, these reasons also translated to a basic lack of respect for their profession. One respondent noted that "income [of the company] is much more important than my license and professional integrity."

Test appropriateness. Another challenge encountered by the majority of the respondents was the suitability of the tests used for their setting and clients, which is related to the aforementioned lack of materials. Some found it challenging to find appropriate tests for very specific populations, such as the elderly with dementia. This highlights the importance of having a range of tests at one's disposal to respond to the various population groups that practitioners serve.

Applicability in the Philippine setting. A third challenge with test materials relates to their applicability in the local setting. Most tests being used in the field today are purchased from abroad.
Test instructions and items are in English and may use expressions and phrases that are not commonly understood in local parlance, which could affect clients' ability to show their true capacities:

When my client did not understand the question, the test result was affected. Because of this, my observation toward the client and the test result contradicts each other. This may be due to the fact that most of our tests were developed by Westerners.

Testing environment. Another challenge frequently encountered by respondents was the physical environment where testing itself was conducted. They reported environmental conditions not conducive to testing because of noise and uncomfortably high temperatures in rooms with only an electric fan serving as poor ventilation. Another practitioner noted the lack of available rooms: "(I) previously worked in the government and there are no permanent testing areas, we administer on (sic) vacant rooms."

Task demands. Practitioners experienced having a lot of clients and insufficient time to accomplish the work to be undertaken. The time it took to accomplish the various tasks of assessment was a factor, especially for tests requiring "laborious scoring and interpretation." Similarly, coming up with well-written reports in a short period of time presented a strong challenge for many practitioners. For one, there is the need to ensure that the report reconciles all the data that have been gathered and that the findings are useful to the client. However, this kind of rigor takes time, and practitioners feel rushed especially when they are asked to turn in their reports as soon as possible. Additionally, the dearth of supervisors increases the conflict between quality and timeliness, especially in certain settings.

Attitude of clients. Difficulties pertaining to client attitude were frequently encountered during the assessment session itself and during report feedback.
First, many practitioners reported that clients may not take the session seriously, which can manifest as the client being "unruly or noisy" or "being uncooperative and defensive." Likewise, particularly in the HR setting where results of assessment have the consequence of a client being hired or not, some practitioners noted that applicants (clients) may try to put their best foot forward, which makes finding the truth challenging:

In the HR setting it's frustrating to encounter applicants who … answer only for desirability or (with) dishonesty; not expressing who they really are.

Report writing. Respondents admitted that it was challenging to write reports that answer the reason for referral. Part of the difficulty may also be deciding what to include in reports, especially if, as one respondent surmised, "the examinee did not take the test seriously." This concern thus speaks to gauging the veracity of findings as well. Respondents also felt pressure when clients insisted on the urgency of reports beyond their capacity to deliver.

Communicating findings. Practitioners encountered obstacles in communicating assessment results to clients who were unable to understand or who refused to accept the results. For the former, using simpler language or being more careful in explaining report findings were solutions. The latter concern was considered more complex since the practitioner was faced with the dilemma of having to defend their findings and recommendations.

Best practices. Despite the challenges in testing conditions and expected outputs that practitioners encountered, there are best practices that many recognized and tried to use in their practice. Many were aware of ethical principles governing assessment.
They also appreciated various aspects of the process, which included knowing the reason for referral, having the means to integrate test data, collaborating with clients, and having facilitative tools such as technology to improve their work. The themes under best practices covered points 1 to 4 of EBPA, from proficient use of psychological assessment to using tools to validate findings.

Supervision. Being supervised by psychologists who are considered experts is seen as most helpful. Aside from profiting from the expertise of the supervisor, practitioners also appreciated the ways their supervisors allowed them some level of autonomy so they could explore how to interact with clients. In addition, the practitioners exerted due diligence in modeling how their supervisors wrote reports, and learned about tests through their own reading and by asking for help. These indicate that respondents who engaged in self-directed learning found themselves more effective in doing assessments. Being knowledgeable about how to administer, score, and interpret tests accurately also increased the practitioners' sense of efficacy.

Peer supervision. Peer supervision transpires when respondents consult a colleague with the same depth and level of experience for concerns they have related to assessment. There may be more opportunities to do this in some settings, particularly where there is more than one practitioner with more or less equal competencies. In this case, supervision entailed exchanging experiences, which proved helpful to colleagues needing the assistance.

Using a battery of tests. Respondents recognized that one best practice in assessment is using multiple sources of information to answer questions about their clients, which could offset the challenges mentioned on test usage, such as limited availability of tests and use of Western tests.
Apart from the use of informant interviews and behavior observations to confirm findings from test data, an optimal combination of tests was also seen to be quite beneficial.

Facilitating factors. Facilitating factors refer to the professional contexts of the practitioners which enable them to observe high standards of psychological assessment practice. These include ethical practice, adherence to standard procedures, collaborative work, and use of technology in assessment.

Ethical practice. It appears that the general ethical principles of competent caring for the well-being of persons, integrity, and professional and scientific responsibilities to society are upheld and considered helpful by our respondents in their work. Most of them said that they relied on the instructions and guidelines in the test manuals. This showed their awareness of and respect for the rigor with which these tests were created and also demonstrated their competence in carrying out their tasks. Additionally, many respondents also cited specific internal qualities and behaviors that helped them do their work well. These included personal habits such as "time management, cautiously reviewing manuals and previous notes, or having a system of doing things," as well as internally motivated behaviors such as "personal discipline, focus, patience, and the conscience in performing my sworn profession ethically." Likewise, the respondents also recognized that they would not be able to observe ethical practices without having all the necessary tools at their disposal. Thus, frequently cited among the factors that helped them do their task well and practice ethically was having complete materials that were in their original form (i.e., not using photocopied forms) and having tests with good psychometric properties (e.g., reliability).
Using other sources of information such as interview data also enhanced their ability to triangulate findings more efficiently and effectively.

Adherence to standard procedures. For many respondents, the knowledge that there was a structure that was followed in assessment gave them confidence that they were being fair towards their clients. Respondents considered adhering to standard testing procedures, such as following test manual instructions, to be a good practice. Another is conducting the whole process themselves such that they were in full control of the testing situation.

Collaborative assessment process. Respondents understood that efficient and effective assessment involved using different data sources to confirm findings from the tests. This included recognizing the important role of the clients themselves, not just as passive recipients of the test information, but also as active participants in the assessment process. Thus, findings were also relayed to them, and information was gathered from them, such as through the interview. One respondent cited how including the clients in the process distributed the responsibility for the assessment outcome equitably:

One of the best practices I have observed and experienced is having discussions with our clients regarding the results of the assessment tools and the assessment tools themselves. This is good because it shows how concerned both parties are with the assessment process.

Use of technology. This aspect of the assessment process dealt more with ease and convenience. Respondents acknowledged that using technology made the whole process more efficient and also enabled them to deal with large amounts of data. Technology included the use of online tests and online scoring programs. Some tests also generated result printouts that included not only standard scores but also narratives that automatically explained what the scores meant.
This was especially helpful in settings such as schools or industries where group or large-scale testing was being done.

DISCUSSION

The quantitative and qualitative results provide a preliminary overview of the practice of psychological assessment. The authors highlight how current practices may not be consistently aligned with the EBPA elements of Bornstein (2017). Misalignments or gaps are elaborated to emphasize the need for an evidence-based practice in assessment, not only to meet international standards, but also to improve current practices and to better serve various stakeholders. The findings yielded some empirical data that can be used as bases for identifying key issues in the practice of psychological assessment in the Philippines. The nine steps of Bornstein will be elucidated through the four areas of focus in the paper, namely the practitioners, the tools, the process, and the stakeholders.

It must be noted that the results are based on the responses of assessment practitioners who answered the survey and, as such, are not completely representative of all assessment practitioners in the country. Most of the respondents are Registered Psychometricians who seem to have limited assessment experience. Current statistics show that the number of Registered Psychometricians is significantly higher than that of Registered Psychologists. With these points, this preliminary overview may be reflective of the contexts, experience, and views of the relatively younger assessment practitioners.

Assessment Practitioners

The first two steps in Bornstein's framework relate to the practitioners, particularly developing their skills and knowledge to be proficient in assessment and keeping updated with current ethical practice. The profile of the practitioner respondents, with a large percentage composed of psychometricians and a smaller percentage of psychologists, mirrors the proportion of assessment practitioners in the Philippines.
As per Professional Regulation Commission (PRC) data, there are more than 3,000 registered psychologists and 19,000 registered psychometricians in the Philippines (R. Resurreccion, personal communication, June 3, 2020). As of 2017, there were 3,220 registered guidance counselors (Senate of the Philippines, 18th Congress, 2019). These disproportionate numbers show that there are a limited number of practitioners who can perform psychological assessment, as the law requires registered psychometricians to be supervised by registered psychologists.

EBPA highlights the importance of developing proficiency in psychological assessment in terms of knowledge, skills, and attitudes. In the Philippines, assessment practitioners are expected to adhere to the provisions of the Psychology Act (2009) in their practice of assessment. For registered psychometricians, proficiency in psychological assessment alone is not sufficient to conduct psychological assessment; there are limited assessment tasks and activities that registered psychometricians can do. This limitation appears to pose a quandary to some practitioners, especially in settings where there is a need to conduct group assessment and come up with quality reports in a short period of time, as there are not enough supervisors to oversee their work in a timely manner. Meanwhile, as assessment practitioners who are registered psychologists become proficient in what they do, some engage less in assessment practice and more in psychotherapy. As mentioned in previous literature (Bekhit, Thomas, Lalonde, & Jolley, 2002; Meyer et al., 2001), experienced and seasoned psychologists likely spend more time doing therapy work, consultation, and other non-assessment related activities.

The issue of psychological assessment proficiency also relates to the training of the practitioners.
Many of the practitioners surveyed cater to more than one type of client, implying that they need both breadth and depth of knowledge in assessment practices as well as appropriate ways of serving various types of clientele. However, the undergraduate assessment training in schools appears to prepare them for a more generalized practice of assessment that is not context-specific, as reflected in the course offerings mandated by the Commission on Higher Education (CMO 34, 2017). The graduate program training in assessment, on the other hand, appears to be more focused on clinical assessment.

Given the limited experience of most of the assessment practitioners surveyed, there is a strong clamor for continuous training and supervision. This is an important finding as it shows that assessment is still considered a valuable endeavor that practitioners want to continue to educate themselves about. These behaviors adhere to some of the elements mentioned by Bornstein (2017) as crucial for an evidence-based assessment practice, specifically in terms of developing knowledge that is based both on theory and practice, as well as ongoing self-monitoring in the course of analysis of test data. Abroad, though there is some indication that the practice of psychological assessment has declined (Norcross & Karpiak, 2012), training for it is still considered valuable in graduate level studies (Mihura, Roy, & Graceffo, 2016). This is likewise mirrored by the findings in our current study. Since many of the practitioners surveyed are young and new to the practice, many value training and supervision. The latter needs to be standardized and advocated for in the current practice of assessment. The importance of continuous training in psychological assessment, as highlighted in this survey, validates what Krishnamurthy et al.
(2004) emphasized in their paper on the role of education and training in developing competencies in psychological assessment. They likewise mentioned the importance of training programs that incorporate technological advances and innovations into assessment measures, and of aligning graduate training with the demand for psychological assessment services. As seen in the findings, addressing the gap between demand (for assessment services) and capacity to deliver (by assessment practitioners) needs to be given focus.

The second point in EBPA likewise calls for practitioners to be updated on current theoretical and empirical developments in psychological assessment. A good starting point might be clarifying the definition of psychological assessment vis-à-vis psychological testing, as defined in international standards, and resolving the overlaps in, and distinguishing among, the assessment practice of registered psychologists, registered guidance counselors, and registered psychometricians.

Psychological Tools

Practitioners who responded to the survey administer most of the tests that are cited in the literature, as seen in the high frequency of usage of some well-known individually administered tests, structured personality tests, and projective techniques. Most of the psychological tests taught in schools and used by the practitioners are Western tests. The applicability of many of these tests is seen as a challenge by the practitioners. Hence, there is an increased recognition of the need to create local tests or adapt tests in current usage, and perhaps establish local norms. This reflects the awareness of our practitioners of the limitations of the tests they are currently using. In EBPA, sensitivity to the culture and preferences of stakeholders (e.g., patient, referring party) is considered one of the proficiencies expected of practitioners.
There have been efforts by various Filipino psychologists to create local/indigenous tests. Nonetheless, these tests may not be as routinely used as the 16PF or the NEO PI. As mentioned by Bernardo (2011), it is important to look into the equivalence of translated and adapted versions with the original tests.

Psychological Assessment Process

The assessment practitioners use multiple methods in the conduct of psychological assessment in order to answer the reason for referral. While many have been trained in the administration and scoring of different psychological tests, they acknowledged that the most challenging parts of the process are the interpretation of tests and the integration of the different findings. Many also recognize the need not only for continuing professional development but also for supervision. Supervision is especially important as younger practitioners learn the process of assessment because of the potentially egregious errors they may commit. As seen in the results, some respondents admitted that during test administration, they explained test items to their clients beyond what was allowed in the test manuals, as a form of accommodation. However, this practice falls neither within what is allowable in standard test administration nor among recommended accommodations. Having a supervisor to oversee the rigor and integrity with which assessment procedures have been followed is beneficial. This being said, it is also important for Filipino psychologists to recognize the urgency of creating local and valid tests; having more of these readily available in assessment practice obviates the need for on-the-spot translation or explanation of items for examinees. Another red flag noted in the process of assessment was the proportion of practitioners (36 respondents) who did not ask their examinees to go through an informed consent process.
Though the context of why they did not go through this crucial process was not asked in this current study, future studies may dig deeper into the contexts where informed consent was waived or circumvented. Some practitioners may not be fully aware of basic ethical principles of professional responsibility, such as fidelity to basic test administration guidelines and obtaining informed consent. Future studies may explore the reasons that practitioners deviate from these professional norms. In general, for the majority of the new assessment practitioners, their ethical conduct of assessment shows promise for the ongoing development of assessment practice in the Philippines.

In the qualitative data, most respondents were quite clear about adhering to manuals and standard testing procedures. Moreover, triangulating data not just from tests but from other assessment sources (e.g., interview data or observation of the client) was also deemed important by respondents. This implies that many of the practitioners are aware that assessment is not merely the administration of tests, but entails a complex and multi-layered process. These practices are again mirrored in the evidence-based assessment model of Bornstein (2017), which includes practitioner competence in utilizing different methods to assess a construct and understanding how different factors may affect assessment performance. That many respondents are also aware of ethical principles such as competent caring for persons and professional and scientific responsibility also shows that our practitioners reflect on their work. Assessment is not simply a transaction where one evaluates clients and makes decisions about them.
While this is part of the expectation of end users, assessment is also recognized by respondents as a relationship among individuals, and thus, care is also placed in choosing tests and the testing environment, or in coming up with a good narrative that answers the reason for referral.

The last step in the process of assessment is the communication of the assessment results to different stakeholders. The practitioners surveyed in this study reported that they come up with varied types of reports depending on the contexts and their end users, such as medical professionals, parents, teachers, and others. This indicates flexibility and shows that practitioners try to be responsive to the needs of stakeholders. It is nevertheless important to exercise care when crafting assessment plans suited to client needs. In this last stage, when the clinician relays findings of the assessment, crucial information is imparted that could potentially be life changing for clients. Ultimately, when practitioners engage in the process of assessment, they should strive to strike a balance between adhering to ethical and professional standards and meeting the needs of clients in a timely and relevant manner. This is the essence of evidence-based practice.

Through the different responses of practitioners who took part in the survey, there seems to be an understanding that assessment is not merely an endeavor done on clients by technicians who administer tests and follow manuals and guidelines. It is essentially a professional relationship, one that is based on mutual trust, respect, and regard, and where standards and expectations are clear. Psychological assessment is a professional transaction where stakeholders know their respective roles and are engaged in it because it is an essential tool in decision-making and intervention.
Limitations and Recommendations for Future Research

The sample is not representative of all the assessment practitioners in the Philippines. Many of the respondents came from NCR, reflecting the limited reach of the survey rather than the state of the practice. It is likely that the way the survey was administered (i.e., online and through a paper-and-pencil survey) made it more accessible to people living in more urbanized regions. The results are also limited by the type of respondents surveyed, who are generally young and with few years of training and experience in the practice of assessment. The demographic profile indicates that only a very small portion of the respondents had doctorate degrees in Psychology. It is therefore uncertain to what extent the demographic characteristics of the respondents, and by implication, the results, reflect the actual state of the practice of psychological assessment in the Philippines.

The authors acknowledge that not everyone who conducts psychological assessment is a psychologist by license or profession (e.g., supervised psychometricians, guidance counselors), and this is clearly reflected in the profile of the respondents. The authors also recognize that some psychologists may find this problematic based on their own interpretation of the Guidance Counseling Act (2004) and Psychology Act (2010), and so further investigation is encouraged to clarify this perceived gap between what the law stipulates and what is seen in actual practice. Further research can be conducted on the development of a survey that is more comprehensive and inclusive and that looks into the different contexts of assessment and different types of professionals. Professional psychologists are likely to benefit from continuous research on assessment training, supervision, and practice.
Researchers can make significant contributions to the practice of psychology when they endeavor to study more specific areas of psychological assessment. Possibilities that can be explored are the translation and validation of standardized Western psychological tests, and the development of local tests that are socio-culturally sensitive to the needs and characteristics of Filipinos.

Conclusion

Psychological assessment is an essential part of the practice of psychology in the Philippines. It plays an essential role in problem identification, intervention, progress monitoring, and evaluation of people in various settings, and the data obtained from the process are meant to improve Filipinos' way of life. Because of its wide-ranging impact, practitioners must constantly reflect on and revisit the way they practice. Keeping in mind the evidence-based assessment practices mentioned in this study is the first step towards doing this. The critical practice of psychological assessment means constantly evaluating the tools and processes of the practice and strengthening standards and best practices for various purposes and contexts. This mindful practice includes the challenge of creating training programs (in schools and in practice) that are specific to the needs of the various contexts where practitioners may find themselves. Likewise, while practitioners adhere to the general process of psychological assessment, they also try to adapt to different contexts (reason for assessment, cultural context, person contexts), levels of training and supervision, and available resources. These adaptations and modifications in the process may need to be reviewed in relation to the definition of EBPA.
Cognizant of the predominance of Western tests both in training and actual assessment, practitioners must thus take every possible step to ensure that the tools used have good psychometric properties and are appropriate for the contexts in which they will be used. Thus, this paper highlights the need for supervision and for developing local tools that can be used in practice.

ACKNOWLEDGMENTS

The authors wish to acknowledge the financial support given by the University Research Council Office (URCO) of the De La Salle University-Manila in the implementation and completion of this project.

REFERENCES

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: AERA.

Archer, R., Buffington-Vollum, J., Stredny, R., & Handel, R. (2006). A survey of psychologist test use patterns among forensic psychologists. Journal of Personality Assessment, 87(1), 84-94.

Bekhit, N. S., Thomas, G. V., Lalonde, S., & Jolley, R. (2002). Psychological assessment in clinical practice in Britain. Clinical Psychology and Psychotherapy, 9, 285-291.

Bernardo, A. B. I. (2011). Lost in translation? Challenges in using psychological tests in the Philippines. Silliman Journal, 52(1), 19-41.

Bernardo, A. B. I., & Estrellado, A. F. (2014). Measuring hope in the Philippines: Validating the short version of the Locus-of-Hope Scale in Filipino. Social Indicators Research, 119, 1649-1661. doi:10.1007/s11205-013-0573-7

Bernardo, A. B. I., Lising, R. L. S., & Shulruf, B. (2013). Validity of two language versions of the Auckland Individualism and Collectivism Scale with Filipino-English bilinguals. Psychological Studies, 58, 33-37. doi:10.1007/s12646-012-0172-8

Bornstein, R. F. (2017). Evidence-based psychological assessment. Journal of Personality Assessment, 99(4), 435-445.
doi:10.1080/00223891.2016.1236343
Commission on Higher Education (CHED) Memorandum Order (CMO) Number 34 (2017). Policies and standards for undergraduate programs in psychology. https://ched.gov.ph/wp-content/uploads/2017/10/CMO-34-s-2017.pdf
Cohen, R. J., & Swerdik, M. E. (2009). Psychological testing and assessment: An introduction to tests and measurement (7th ed.). Columbus, OH: McGraw-Hill Co.
Curry, K. T., & Hanson, W. E. (2010). National survey of psychologists' test feedback training, supervision, and practice: A mixed methods study. Journal of Personality Assessment, 92(4), 327-336. doi:10.1080/00223891.2010.48200
Ganotice, F. A., Bernardo, A. B. I., & King, R. B. (2012a). Adapting the Facilitating Conditions Questionnaire (FCQ) for bilingual Filipino adolescents: Validating English and Filipino versions. Child Indicators Research, 6, 237-256. doi:10.1007/s12187-012-9167-1
Groth-Marnat, G., & Wright, A. J. (2016). Handbook of psychological assessment (6th ed.). NY: John Wiley & Sons, Inc.
Guidance and Counseling Act, Rep. Act No. 9258 (2004). Retrieved from https://lawphil.net/statutes/repacts/ra2004/ra_9258_2004.html
Hibbard, S. (2003). A critique of Lilienfeld et al.'s (2000) "The Scientific Status of Projective Techniques". Journal of Personality Assessment, 80(3), 260-271.
The International Project on Competence in Psychology. (2016). International declaration on core competences in professional psychology. Retrieved from https://www.psykologforeningen.no/foreningen/english/ipcp
Krishnamurthy, R., VandeCreek, L., Kaslow, N., Tazeau, Y., Miville, M., Kerns, ... & Benton, S. (2004). Achieving competency in psychological assessment: Directions for education and training. Journal of Clinical Psychology, 60(7), 725-739.
Lilienfeld, S., Wood, J., & Garb, H. (2000). The scientific status of projective techniques. Psychological Science in the Public Interest, 1(2), 27-66.
Meteyard, J., & Gilmore, L. (2015).
Psycho-educational assessment of specific learning disabilities: Views and practices of Australian psychologists and guidance counselors. Journal of Psychologists and Counsellors in Schools, 25(1), 1-12.
Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R., ... & Reed, G. M. (2001). Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist, 56(2), 128-165.
Mihura, J. L., Roy, M., & Graceffo, R. A. (2016). Psychological assessment training in clinical psychology doctoral programs. Journal of Personality Assessment, 99(2), 153-164.
Musewicz, J., Marczyk, G., Knauss, L., & York, D. (2009). Current assessment practice, personality measurement, and Rorschach usage by psychologists. Journal of Personality Assessment, 91(5), 453-461.
Nalipay, M. J. N., Bernardo, A. B. I., Tarroja, M. C. H., & Bautista, M. L. C. (2018). The factor structure of the English CBCL DSM-oriented scales in Filipino schoolchildren as reported by bilingual caregivers. International Journal of School & Educational Psychology, 7(1), 102-110.
Nel, P., Pezzolesi, C., & Stott, D. (2012). How did we learn best? A retrospective survey of clinical psychology training in the United Kingdom. Journal of Clinical Psychology, 68(9), 1058-1073.
Norcross, J. C., & Karpiak, C. P. (2012). Clinical psychologists in the 2010s: 50 years of the APA Division of Clinical Psychology. Clinical Psychology: Science and Practice, 19(1), 1-12.
Pawlik, K., Zhang, H., Vrignaud, P., Roussalov, V., & Fernandez-Ballesteros, R. (2000). Psychological assessment and testing. In K. Pawlik & M. R. Rosenzweig (Eds.), International handbook of psychology (pp. 365-406). SAGE Publications Ltd.
Philippine Psychology Act, Rep. Act No. 10029 (2009). Retrieved from https://lawphil.net/statutes/repacts/ra2010/ra_10029_2010.html
Piotrowski, C. (1999).
Assessment practices in the era of managed care: Current status and future directions. Journal of Clinical Psychology, 55(7), 787-796.
Senate of the Philippines, 18th Congress. (2019, June 9). Gatchalian to DepEd: Hire more guidance counselors to deter bullying in schools [Press release]. Retrieved from https://www.senate.gov.ph/press_release/2019/0609_gatchalian1.asp
Sotelo-Dynega, M., & Dixon, S. (2014). Cognitive assessment practices: A survey of school psychologists. Psychology in the Schools, 51(10), 1031-1045.
Tarroja, M. C., Catipon, M. A., Dey, M. L., & Garcia, W. C. (2013). Advocating for play therapy: A challenge for an empirically-based practice in the Philippines. International Journal of Play Therapy, 22(4), 207-218.
Teh, L. (2003). A survey on the practice and status of psychotherapy in the Philippines. Philippine Journal of Psychology, 36(1), 112-133.
Tuason, M. T., Fernandez, K., Catipon, M. A., Dey, L., & Carandang, M. L. (2012). Counseling in the Philippines: Past, present, and future. Journal of Counseling & Development, 90, 373-377.
Wright, C. V., Beattie, S. G., Galper, D. I., Church, A. S., Bufka, L. F., Brabender, V. M., & Smith, B. L. (2017). Assessment practices of professional psychologists: Results of a national survey. Professional Psychology: Research and Practice, 48(2), 73-78.