Software essays
Structured approach - follows the software development cycle. It takes the whole program and divides it into steps that can be followed systematically to arrive at a solution. Each step must be completed before the next one starts, as checking and correcting each step is vital. The structured approach is mostly used for complex programs. The advantages of this approach are that the solution is thoroughly tested, it meets the requirements of users and it draws on a range of experts. The disadvantages are that it is costly, time consuming and requires a wide range of different skills.

Prototyping approach - involves building a working model that is evaluated by users. The model is then modified and evaluated further to refine the solution. Prototyping would be used in manufacturing and engineering to produce an early version of a product. There are two types of prototypes: information gathering and evolutionary. Information gathering prototypes are developed to gather information that can be used in another program. Evolutionary prototypes become the full working program. The advantages of the prototyping approach are relatively fast development and the modelling of a larger project, which allows easier modification of the end product. A disadvantage is that it may be difficult to implement the prototype as a full working program.

Rapid applications approach - rapid application development (RAD) is a method of software design that uses tools to quickly generate a program for a user. It uses existing modules to create a solution, with CASE tools (computer aided software engineering) assisting in the development of the program. The advantages of RAD are fast development, it is relatively cheap and code can be reused. The disadvantages are ...
Human Resource Development - Assignment Example
Training is a systematic modification of the attitude and behavior of an individual by means of various learning programs and instructions, which would enable these individuals to achieve a high level of knowledge, competence and skills for carrying out their work in an effective manner (Holton and Baldwin, 2003; Velada and Caetano, 2007). Training involves a learning process. However, there is a difference between training and the learning process. A training program is teacher-focused, whereas the learning process is learner-focused. In the case of learning, the ultimate goal remains the production of a learning process; in the case of training, the ultimate goal is training the staff within an organization. In learning, the learner plays an active role, whereas in training, the learner plays a passive role. Training plays an important role within an organization, and training needs analysis or assessment is a broad concept that plays a number of roles. There are various types of training needs within an organization:

Organizational needs - The organization needs training and development programs to educate and increase the knowledge of its employees, which in turn strengthens the organizational goals, strategies and objectives. The training program is sometimes suggested as the best solution for meeting business problems.

Personal needs - The potential participants would gain experience, knowledge and learning. The training increases the knowledge, skills and ability of individuals and helps them improve their individual performance, thereby improving the performance of the overall organization.

Performance needs - If the employees are not performing up to the desired or established standards, then training and development programs help in improving their level of performance. This tries to reduce the performance gap of the employees in an ...

This paper stresses that the organization needs training and development programs to educate and increase the knowledge of its employees, which in turn strengthen the organizational goals, strategies and objectives. The training program is sometimes suggested as the best solution for meeting business problems. Contemporary training initiatives aim at linking the employers of the organizations with skill brokers, who offer independent and impartial advice to the organizations and match the training needs with the most suitable training providers, in order to provide the best training and development programs to the employees.

This essay concludes that training is a systematic modification of the attitude and behavior of any individual by means of the implementation of various learning programs, instructions and events, which enable these individuals to achieve an increased level of knowledge, competence and skills for performing their functions in an effective manner. It is very important for an organization to implement an effective training and development program for training and increasing the skills, knowledge and ability of its employees. The impact of training and development programs on reaction, learning, behavior and results reflects the success or failure of such programs. The UK government has played an important role in supporting training and development programs in organizations.
Organizational Behavior on Henry Ford - Essay Example
This plays on what Gilbreth, a famous industrial/organizational psychologist, called time-and-motion theory. This is a way in which Ford was able to produce automobiles at an accelerated rate by giving everyone a designated task and forming the assembly line. Originally, Ford designed a static assembly line, but as his time-and-motion theory developed, he employed the use of a conveyor system to make a moving assembly line, which increased production. This also brought in the field of engineering psychology, to see how to design the work environment so that it was safe for workers but maximized efficiency. This idea of production worked so well that it became fundamental during the world wars, enabling the United States and other countries to produce military vehicles. In today's society, we now operate on the concept of the automated assembly line, in which we program machines to create the products, allowing work to continue on the assembly line almost 24/7. Many companies have taken the original idea of the assembly line and applied it to their business (Batechelor, 1994). Another way in which Ford was able to maximize production was through the concept of interchangeable parts, which helped make the assembly line run efficiently. Prior to interchangeable parts, if something on the Model T was broken, an entirely new part would have to be created. The idea of creating multiple parts ensures that if a part breaks, there is an immediate identical part that can be used to replace it. This minimizes the skill level necessary to complete the repair and decreases the amount of time required to accomplish it. This has made an impact not only in the professional world but also on general consumer behavior (Freeman & Soete, 2004). Ford was a believer in the American Dream. In this respect, he was always trying to make sure that he kept job satisfaction high in order to keep turnover low.
The Concept of Bureaucracy as an Effective System of Organization - Essay Example
This research will begin with the statement that various descriptions and concepts have been developed regarding bureaucracy. From the definition, bureaucracy can be described as a management system intended to handle the affairs of the state and organize the relationship between the state and the citizens. Max Weber, a sociologist, described bureaucracy extensively, and his ideas are more or less acceptable; some of his works include the Rational Efficient Organization. While political scientists describe bureaucracy as state administration, economists use the term to describe non-market organizations. To some extent, most organizations have been bureaucratized. Our mechanistic mode of thinking has shaped the basic concepts of what a good organization entails. Such thinking has played a major role in defining how an organization defines its responsibilities and the accountability involved. According to theorists, bureaucracy can adversely affect the strategies of institutions and organizations regarding the way they want to achieve their objectives. However, at times, those organizations and institutions may tend to disagree on how to shape and reshape their interests and goals. From an organizational perspective, institutions and organizations can easily endow individual actors with interests and goals, on condition that some specific features of the organization remain in place. Bureaucracy gives bosses control over their subordinates, and subordinates should, in turn, follow the instructions. As a result, subordination and control form the major part of a bureaucratic system, forming the organizing principles that guide decisions, direct actions and determine outcomes. Each employee should follow the instructions given to them by their seniors. Research shows that bureaucratic practices create in people's minds a lack of curiosity, making them function only within limits based on set rules and regulations. The bureaucratic mind will, therefore, being in control, use its authority to restrain the system's ability to reform itself. In such a system, no person has the power to initiate any changes or propose drastic changes that could disrupt the prevailing peace and order.
Reliability of Speaking Proficiency Tests
Introduction

Testing, as a part of English teaching, is a very important procedure, not just because it can be a valuable source of information about the effectiveness of learning and teaching but also because it can improve teaching and arouse the students' motivation to learn. Testing oral proficiency has become one of the most important issues in language testing since the role of speaking ability has become more central in language teaching with the advent of communicative language teaching (Nakamura, 1993). However, assessing speaking is challenging (Luoma, 2004). Validity and reliability, as fundamental concerns and essential measurement qualities of the speaking test (Bachman, 1990; Bachman & Palmer, 1996; Alderson et al., 1995), have aroused widespread attention. The validation of the speaking test is an important area of research in language testing.

Testing of oral proficiency started in China only about 15 years ago, and there are a few very dominant tests. An increasing number of Chinese linguists are focusing their attention and efforts on the analysis of their validity and reliability. Institutions began to introduce speaking tests into English exams in recent years with the widespread promotion of communicative language teaching (CLT). Publications that deal with speaking tests within institutions provide some qualitative assessments (Cai, 2002), but there is relatively little research literature relating to the reliability and validity of such measures within a university context (Wen, 2001).

The College English Department at Dalian Nationalities University (DLNU) has been selected as one of thirty-one institutions of the College English Reform Demonstration Project in the People's Republic of China. In the College English (CE) course at DLNU, the speaking test is one of the four subtests of the final English examination. The examination uses two different formats. One is a semi-direct speaking test, in which examinees talk to microphones connected to computers and have their speech recorded for the teachers to rate afterwards. The other is a face-to-face interview. The research in this paper aims to ascertain the degree of reliability and validity of these speaking tests. By analyzing the results of the research, teachers will become more aware of the validity and reliability of oral assessments, including how to improve the reliability and validity of speaking tests. As a language teacher, I will gain insight into the operation of language proficiency tests. In order to improve the degree of reliability and validity of a particular test, I will also take other qualities of test usefulness, such as practicality and authenticity, into account when designing language proficiency tests.

Research questions

This study mainly addresses the questions of validity and reliability of the speaking test administered at DLNU. These are comprehensive concepts that involve analysis of test tasks, administration, rating criteria, examinee and tester attitudes towards the test, the effect of the test on teaching, and teacher or learner attitudes towards learning for the test (Luoma, 2004). Therefore, the purpose of this study is to answer the following research questions:

1. Is the speaking test administered at DLNU a valid and reliable test? This question involves the following two sub-questions:
1) To what extent is the speaking test administered at DLNU reliable?
2) To what extent is the speaking test administered at DLNU valid?
2. In what aspects and to what extent may the validity and reliability of the speaking test administered at DLNU be improved?

Literature Review

This chapter presents a theoretical framework of the speaking construct, ways of testing speaking, the marking of speaking tests, and the reliability and validity of speaking tests; it also introduces the situation of speaking tests in China.

Analyzing Speaking And Speaking Tests

The Nature Of Speaking

Speaking, as a social and situation-based activity, is an integral part of people's daily lives (Luoma, 2004). Testing second language speaking is often claimed to be a much more difficult undertaking than testing other second language abilities, capacities, competencies or skills (Underhill, 1987). Assessment is difficult not only because speaking is fleeting, temporal and ephemeral, but also because of the comprehensibility of pronunciation, the special nature of spoken grammar and spoken vocabulary, and the interactive and social features of speaking (Luoma, 2004), as well as the "unpredictability and dynamic nature" of language itself (Brown, 2003).

To have a clear understanding of what it means to be able to speak a language, we must understand that the nature and characteristics of the spoken language differ from those of the written form (Luoma, 2004; McCarthy & O'Keefe, 2004; Bygate, 2001) in its grammar, syntax, lexis and discourse patterns. Spoken English involves reduced grammatical elements arranged into formulaic chunk expressions or utterances, with less complex sentences than written texts. Spoken English breaks the standard word order because the omitted information can be restored from the immediate context (McCarthy & O'Keefe, 2004; Luoma, 2004; Bygate, 2001; Fulcher, 2003). Spoken English contains frequent use of the vernacular, interrogatives, tails, adjacency pairs, fillers and question tags, which have been interpreted as dialogue facilitators (Luoma, 2004; Carter & McCarthy, 1995). Speech also contains a fair number of slips and errors, such as mispronounced words, mixed sounds, and wrong words due to inattention, which is often pardoned and allowed by native speakers (Luoma, 2004). Conversations are also negotiable, unpredictable, and susceptible to the social and situational context in which the talk happens (Luoma, 2004).

The Importance Of The Speaking Test

Testing oral proficiency has become one of the most important issues in language testing since the role of speaking ability has become more central in language teaching with the advent of CLT (Nakamura, 1993). Of the four language skills (listening, speaking, reading, writing), listening and reading occur in the receptive mode, while speaking and writing exist in the productive mode. Understanding and absorption of received information are foundational, while expression and use of acquired information demonstrate an improvement and a more advanced test of knowledge. Much of the current interest in oral testing exists partly because second language teaching is more than ever directed towards the speaking and listening skills (Underhill, 1987). Language teachers are engaged in "teaching a language through speaking" (Hughes, 2002:7). On one hand, spoken language is the focus of classroom activity, and there are often other aims which the teacher might have: for instance, helping the student gain awareness of and practice in some aspect of linguistic knowledge (ibid).
On the other hand, the speaking test, as a device for assessing the learner's language proficiency, also functions to motivate students and reinforce their learning of the language. This represents what Bachman (1991) has called an "interface" between second language acquisition (SLA) and language testing research. However, assessing speaking is challenging, "because there are many factors that influence our impression of how well someone can speak a language" (Luoma, 2004:1), as well as because of the unpredictable or impromptu nature of spoken interaction. The testing of speaking is difficult due to practical obstacles and theoretical challenges. Much attention has been given to how to perfect the assessment system for oral English and how to improve its validity and reliability. The communicative nature of the testing environment also remains to be considered (Hughes, 2002).

The Construct Of Speaking

Introduction To Communicative Language Ability (CLA)

A clear and explicit definition of language ability is essential to language test development and use (Bachman, 1990). The theory on which a language test is based determines which kind of language ability the test can measure; this type of validity is called construct validity. According to Bachman (1990:84), CLA can be described as "consisting of both knowledge or competence and the capacity for implementing or executing that competence in appropriate, contextualized communicative language use". CLA includes three components: language competence, strategic competence and psychophysiological mechanisms. The framework in Figure 2.1 (Bachman, 1990:85) shows the components of communicative language ability in communicative language use: knowledge structures (knowledge of the world), language competence (knowledge of language), strategic competence, psychophysiological mechanisms, and the context of situation.

This framework has been widely accepted in the field of language testing. Bachman (1990:84) proposes that "language competence" essentially refers to a set of specific knowledge components that are utilized in communication via language. It comprises organizational and pragmatic competence. The two areas of organizational knowledge that Bachman (1990) distinguishes are grammatical knowledge and textual knowledge. Grammatical knowledge comprises vocabulary, syntax, phonology and graphology, while textual knowledge comprises cohesion and rhetorical or conversational organization. Pragmatic competence shows how utterances or sentences and texts are related to the communicative goals of language users and to the features of the language-use setting.
It includes illocutionary acts, or language functions, and sociolinguistic competence, or the knowledge of the sociolinguistic conventions that govern appropriate language use in a particular culture and in varying situations in that culture (Bachman, 1987).

Strategic competence refers to mastery of verbal and nonverbal strategies for facilitating communication and implementing the components of language competence. Strategic competence is demonstrated in contextualized communicative language use, for example in drawing on sociocultural knowledge and real-world knowledge and mapping this onto the maximally efficient use of existing language abilities. Psychophysiological competence refers to the visual and auditory skills used to gain access to the information in the administrator's instructions. Among other things, psychophysiological competence covers factors such as sound and light.

Fulcher's Construct Definition

Knowing what to assess in a speaking test is a prime concern. Fulcher (1997b) points out that the construct of speaking proficiency is incomplete. Nevertheless, there have been various attempts to reflect the underlying construct of speaking ability and to develop theoretical frameworks for defining the speaking construct. Fulcher's framework (Figure 2.2) (Fulcher, 2003:48) describes the speaking construct. As Fulcher (2003) points out, there are many factors that could be included in the definition of the construct:

Phonology: the speaker must be able to articulate the words, have an understanding of the phonetic structure of the language at the level of the individual word, have an understanding of intonation, and create the physical sounds that carry meaning.

Fluency and accuracy: these concepts are associated with automaticity of performance and the impact on the ability of the listener to understand. Accuracy refers to the correct use of grammatical rules, structure and vocabulary in speech. Fluency has to do with the "normal" speed of delivery, the ability to mobilise one's language knowledge in the service of communication at relatively normal speed. The quality of speech needs to be judged in terms of the gravity of the errors made or the distance from the target forms or sounds.

Strategic competence: this is generally thought to refer to an ability to achieve one's communicative purpose through the deployment of a range of coping strategies. Strategic competence includes both achievement strategies and avoidance strategies. Achievement strategies include overgeneralization/morphological creativity: learners transfer knowledge of the language system onto lexical items that they do not know, for example saying "buyed" instead of "bought". Speakers also use approximation: learners replace an unknown word with one that is more general, or they use exemplification, paraphrasing (using a synonym for the word needed), word coinage (inventing a new word for an unknown word), restructuring (using different words to communicate the same message), cooperative strategies (asking for help from the listener), code switching (taking a word or phrase from the common language with the listener in order to be understood) and non-linguistic strategies (using gestures or mime, or pointing to objects in the surroundings to help to communicate). Avoidance or reduction strategies consist of formal avoidance (avoiding using part of the language system) and functional avoidance (avoiding topical conversation). Strategic competence includes selecting communicative goals and planning and structuring oral production so as to fulfil them.
Textual knowledge: competent oral interaction involves some knowledge of how to manage and structure discourse, for example through appropriate turn-taking, opening and closing strategies, maintaining coherence in one's contributions and employing appropriate interactional routines such as adjacency pairs.

Pragmatic and sociolinguistic knowledge: effective communication requires appropriateness and knowledge of the rules of speaking. A range of speech acts, politeness and indirectness can be used to avoid causing offence.

Ways Of Testing Speaking

Clark (1979) puts forward a theoretical basis for discriminating three types of speaking tests: direct, semi-direct and indirect tests. Indirect tests belong to the "procommunicative" era in language testing, in which the test takers are not actually required to speak. They have been regarded as having the least validity and reliability, while the other two formats are more widely used (O'Loughlin, 2001). In this section, the characteristics, advantages and disadvantages of the direct and semi-direct test are presented.

The Oral Proficiency Interview Format

One of the earliest and most popular direct speaking test formats, and one that continues to exert a strong influence, is the oral proficiency interview (OPI), developed originally by the FSI (Foreign Service Institute) in the United States in the 1950s and later adopted by other government agencies. It is conducted with an individual test-taker by a trained interviewer, who assesses the candidate using a global band scale (O'Loughlin, 2001). It typically begins with a warm-up discussion of a few easy questions, such as getting to know each other or talking about the day's events. The main interaction then contains the pre-planned tasks, such as describing or comparing pictures, narrating from a picture series, talking about a pre-announced or examiner-selected topic, or possibly a role-play task or a reverse interview in which the examinee asks questions of the interviewer (Luoma, 2004). An important example of this type of test is the speaking component of the International English Language Testing System (IELTS), which is adopted in 105 different countries around the world each year.

The Advantages Of An Interview Format

The oral interview is recognized as the most commonly used speaking test format. Fulcher (2003) suggests that this is partly because the questions used can be standardized, making comparison between test takers easier than when other task types are used. Using this method, the instructor can get a sense of the oral communicative competence of students and can overcome weaknesses of written exams, because the interview, unlike written exams, "is flexible in that the questions can be adapted to each examinee's performance, and thus the testers have more control over what happens in the interaction" (Luoma, 2004:35). It is also relatively easy to train raters and obtain high inter-rater reliability (Fulcher, 2003).

The Disadvantages Of An Interview Format

However, concern and skepticism exist about whether it is possible to test other competencies or knowledge, because of the nature of the discourse that the interview produces (van Lier, 1989).

a. Issue of time

For the instructor, time management can be quite an issue. For instance, using a two-hour period for exams for 20 students means each student is allowed only six minutes for testing. This includes the time needed to enter the room and adjust to the setting.
With such a time limit the student and instructor can hardly have any kind of normal real-world conversation.

b. Issue of asymmetrical relationship

The asymmetrical relationship between examiners and candidates elicits a form of inauthentic and limited socio-cultural context (van Lier, 1989; Savignon, 1985; Yoffe, 1997). Yoffe (1997) commented of the ACTFL (American Council on the Teaching of Foreign Languages) OPI that the tester and the test-taker are "clearly not in equal positions" (Yoffe, 1997). The asymmetry is not specific to the OPI but is inherent in the notion of an interview as an exchange wherein one person solicits information in order to arrive at a decision while the interlocutor produces what he or she perceives as most valued. The interviewee is, in most cases, acutely aware of the ramifications of the OPI rating and is, consequently, under a great deal of stress. Van Lier (1989) also challenges the validity of the OPI in terms of this asymmetry, because "the candidate speaks as to a superior and is unwilling to take the initiative" (van Lier, 1989). Under the unequal relationship, features of the speech discourse such as turn-taking, topic nomination and development, and repair strategies are all substantially different from normal conversational exchanges (see van Lier, 1989).

c. Issue of interviewer variation

Given the fact that the interviewer has considerable power over the examinee in an interview, concerns have been raised about the effect of the interlocutor (examiner) on the candidate's oral performance. Different interviewers vary in their approaches and attitudes toward the interview. Brown (2003) warns of the danger of such variation to fairness. O'Sullivan (2000) conducted an empirical study indicating that learners perform better when interviewed by a woman, regardless of the sex of the learner. Underhill (1987:31) expresses his concern that the unscripted "flexibility ... means that there will be a considerable divergence between what different learners say, which makes a test more difficult to assess with consistency and reliability."

Testing Speaking In Pairs

There has been a shift toward a paired speakers format: two assessors examine two candidates at a time. One assessor interacts with the two candidates and rates them on a global scale, while the other does not take part in the interaction and just assesses using an analytic scale. The paired oral test has been used as part of large-scale, international, standardized oral proficiency tests since the late 1980s (Ildikó, 2001). The Key English Test (KET), Preliminary English Test (PET), First Certificate in English (FCE) and Certificate in Advanced English (CAE) make use of the paired format. In a typical test, the interaction begins with a warm-up, in which the examinees introduce themselves to the interlocutor, followed by two paired interaction tasks. The talk may involve each candidate comparing two photographs at first, as in the Cambridge First Certificate (Luoma, 2004), then a two-way collaborative task between the two candidates based on more photographs, artwork or computer graphics, and it ends with a three-way discussion between the two examinees and the interlocutor about a general theme related to the earlier discussion.

The advantages of the paired interview format

Many researchers claim that the paired format is preferable to the OPI. The reasons are:
a. The changed role of the interviewer frees up the instructors to pay closer attention to the production of each candidate than if they were participants themselves (Luoma, 2004).

b. The reduced asymmetry allows more varied interaction patterns, which elicits a broader sample of discourse and more turn-taking than was possible in the highly asymmetrical traditional interview (Taylor, 2000).

c. The task type based on pair work will generate a positive washback effect on classroom teaching and learning (Ildikó, 2001). In the case of an instructor following Communicative Language Teaching (CLT) methodology, where pair work may take up a significant portion of a class, it would be appropriate to incorporate similar activities in the exam. In that way the exam itself is much better integrated into the fabric of the course. Students can be tested on performance related to activities done in class. There may also be benefits with regard to student motivation. If students are aware that they will be tested on activities similar to the ones done in class, they may have more incentive to be attentive and use class time effectively.

The disadvantages of the paired interview format

There are, however, also concerns voiced regarding the paired format.

a. Mismatches between peer interactants

The most frequently raised criticisms against the paired speaking test relate to various forms of mismatches between peer interactants (Fulcher, 2003). Ildikó (2001) points out that when a candidate has to work with an incomprehensible or uncomprehending peer partner, it may negatively influence the candidate's performance. As a consequence, in such cases it is quite impossible to make a valid assessment of the candidate's abilities.

b. Lack of familiarity between peer interactants

The extent to which this testing format actually reduces the level of anxiety of test-takers compared to other test formats remains doubtful (Fulcher, 2003). O'Sullivan (2002) suggests that the spontaneous support offered by a friend positively reduces anxiety and improves task performance under experimental conditions. However, the chances are quite high that the examinee will meet strangers as his or her peer interactants. It is hard to imagine how these strangers can carry out a naturally flowing conversation. Estrangement, misinterpretation and even breakdown may occur during their talk.

c. Lack of control of the discussion

Problems are generated if the examiner loses control of the oral task (Luoma, 2004). When the instructions and task materials are not clear enough to facilitate the discussion, the examinees' conversation may go astray. Luoma (2004) points out that testers often feel uncertain about how much responsibility they should give to the examinees. Furthermore, examinees do not know what kind of performance will earn them good results without the elicitation of the examiner. When one of the examinees has said too little, the examiner ought to monitor and jump in to give help when necessary.
Semi-Direct Speaking Tests

The term "semi-direct" is employed by Clark (1979:36) to describe those tests that are characterized "by means of tape recordings, printed test booklets, or other 'non-human' elicitation procedures, rather than through face-to-face conversation with a live interlocutor." Appearing during the 1970s, and being an innovative adaptation of the traditional OPI, the semi-direct method normally follows the general structure of the OPI and makes an audio recording of the test-taker's performance, which is later rated by one or more trained assessors (Malone, 2000). Examples of the semi-direct type used in the U.S.A. are the Simulated Oral Proficiency Interview (SOPI) and the Test of Spoken English 2000 (TSE) (Ferguson, 2009). Examples in the U.K. include the Test in English for Educational Purposes (TEEP) and the Oxford-ARELS Examinations (O'Loughlin, 2001). Another mode of delivery is testing by telephone, as in the PhonePass test (which mainly consists of reading sentences aloud or repeating sentences), or even video-conferencing (Ferguson, 2009).

The Advantages Of The Semi-Direct Test Type

First, the semi-direct test is more cost-efficient than direct tests, because many candidates can be tested simultaneously in large laboratories, administered by any teacher, language lab technician or aide in a language laboratory where the candidate hears taped questions and has their responses recorded (Malone, 2000).

Second, the mode of testing is quite flexible. It provides a practical solution in situations where it is not possible to deliver a direct test (O'Loughlin, 2001), and it can be adapted to the desired level of examinee proficiency and to specific examinee age groups, backgrounds, and professions (Malone, 2000).

Third, semi-direct testing represents an attempt to standardize the assessment of speaking while retaining the communicative basis of the OPI (Shohamy, 1994). It offers the same quality of interview to all examinees, and all examinees respond to the same questions, so as to remove the effect that a human interlocutor would have on the candidate (Malone, 2000). The uniformity of the elicitation procedure greatly increases the reliability of the test. Some empirical studies (Stansfield, 1991) show high correlations (0.89-0.95) between the direct and semi-direct tests, indicating that the two formats can measure the same language abilities and that the SOPI can be the equivalent and surrogate of the OPI.
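As an illustration of what such a correlation figure measures, the following minimal Python sketch computes the Pearson correlation between the scores a group of examinees received under the two formats. The scores here are invented purely for the example; they are not data from Stansfield (1991) or from the DLNU tests.

    # Illustrative sketch only: the scores below are invented, not data from
    # Stansfield (1991) or from any test discussed in this paper.
    # A Pearson correlation near 1.0 between direct (OPI) and semi-direct (SOPI)
    # ratings of the same examinees is the kind of evidence cited for treating
    # the two formats as measuring the same ability.

    def pearson(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x ** 0.5 * var_y ** 0.5)

    # Hypothetical band scores (0-5 scale) for ten examinees under each format.
    opi_scores = [3.0, 4.5, 2.5, 5.0, 3.5, 4.0, 2.0, 3.0, 4.5, 3.5]
    sopi_scores = [3.5, 4.0, 2.5, 4.5, 3.0, 4.0, 2.5, 3.5, 4.5, 3.0]

    print(f"OPI-SOPI correlation: {pearson(opi_scores, sopi_scores):.2f}")

A high correlation of this kind speaks only to the statistical relationship between the two sets of scores; it does not settle the qualitative differences between the formats discussed below.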
However, there are also disadvantages.

The Disadvantages Of The Semi-Direct Test Type

First, the speaking task in a semi-direct oral test is less realistic and more artificial than in the OPI (Clark, 1979; Underhill, 1987). Examinees use artificial language to "respond to tape-recorded questions [in] situations the examinee is not likely to encounter in a real-life setting" (Clark, 1979:38). They may feel stressed while speaking to a microphone rather than to another person, especially if they are not accustomed to the laboratory setting (O'Loughlin, 2001).

Second, the communicative strategies and speech discourse elicited in these semi-direct SOPIs are quite different from those found in typical face-to-face interaction, being more formal and less conversation-like (Shohamy, 1994). Candidates tend to use written-style language in the tape-mediated test, more of a report or narration, while they focus more on interaction and on the delivery of meaning in the OPI.

Third, there are often technical problems that can result in poor-quality recordings or even no recording in the SOPI format (Underhill, 1987).

In conclusion, one cannot assume any equivalence between a face-to-face test and a semi-direct test (Shohamy, 1994). It may be that they are measuring different things, different constructs, so the mode of test delivery should be adopted on the basis of test purpose, accuracy requirements, practicability and impartiality (Shohamy, 1994). Stansfield (1991) proposes that the OPI is more applicable to placement tests and curriculum evaluation, while the SOPI is more appropriate for large-scale tests with a requirement of high reliability.

Marking Of Speaking Tests

Marking and scoring is a challenge in assessing second language oral proficiency. Since only a few elements of the speaking skill can be scored objectively, human judgments play a major role in assessment. How to establish valid, reliable and effective marking criteria and scales and high-quality scoring instruments has always been central to the performance testing of speaking (Luoma, 2004). It is important to have clear, explicit criteria to describe the performance, and it is equally important for raters to understand and apply these criteria, making it possible to score performances consistently and reliably. For these reasons, rating and rating scales have been a central focus of research in the testing of speaking (Ferguson, 2009).

Definition Of Rating Scales

A rating scale, also referred to as a "scoring rubric" or "proficiency scale", is defined by Davies et al. as (see Fulcher, 2003):
- consisting of a series of bands or levels to which descriptions are attached
- providing an operational definition of the constructs to be measured in the test
- requiring training for its effective operation

Holistic And Analytic Rating Scales

There are different types of rating scales used for scoring speech samples. One of the traditional and commonly used distinctions is between holistic and analytic rating scales.

Holistic rating scales are also referred to as global rating scales. With these scales, the rater attempts to match the speech sample with a particular band whose descriptors specify a range of defining characteristics of speech at that level. A single score is given to each speech sample, either impressionistically or guided by a rating scale, to encapsulate all the features of the sample (Bachman & Palmer, 1996).

Analytic rating scales consist of separate scales for different aspects of speaking ability (e.g. grammar/vocabulary, pronunciation, fluency, interactional management, etc.). A score is given for each aspect (or dimension), and the resulting scores may be combined in a variety of ways to produce a single composite overall score. They give detailed guidance to raters and provide rich information on specific strengths and weaknesses in examinee performance (Fulcher, 2003). Analytic scales are particularly useful for diagnostic purposes and for providing a profile of competence in the different aspects of speaking ability (Ferguson, 2009).

The type of scale that is selected for a particular test of speaking will depend upon the purpose of the test.
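To make the contrast concrete, here is a minimal Python sketch of analytic scoring. The criteria, band scale and weights are invented for illustration; they are not the scales used at DLNU or in any of the tests cited above.

    # Illustrative sketch only: criteria, bands and weights are invented,
    # not taken from any rating scale discussed in this paper.

    ANALYTIC_WEIGHTS = {
        "grammar_vocabulary": 0.3,
        "pronunciation": 0.2,
        "fluency": 0.3,
        "interactional_management": 0.2,
    }

    def composite_score(ratings: dict) -> float:
        """Weighted average of analytic sub-scores (each on a 0-5 band scale)."""
        return sum(ANALYTIC_WEIGHTS[criterion] * band
                   for criterion, band in ratings.items())

    # One rater's analytic ratings for a single examinee.
    ratings = {
        "grammar_vocabulary": 4,
        "pronunciation": 3,
        "fluency": 4,
        "interactional_management": 5,
    }

    print(f"Composite score: {composite_score(ratings):.1f} / 5")

A holistic scale, by contrast, would record only a single band judgement per examinee, with no sub-scores to combine.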
Validity And Reliability Of Speaking Tests

Bachman And Palmer's Theories On Test Usefulness

The primary purpose of a language test is to provide a measure that can be interpreted as an indicator of an individual's language ability (Bachman, 1990; Bachman & Palmer, 1996). Bachman and Palmer (1996) propose that test usefulness includes six test qualities: reliability, construct validity, authenticity, interactiveness, impact (washback) and practicality. Their notion of usefulness can be expressed as in Figure 2.3:

Usefulness = Reliability + Construct validity + Authenticity + Interactiveness + Impact + Practicality

These qualities are the main criteria used to evaluate a test. "Two of the qualities, reliability and validity, are critical for tests and are sometimes referred to as essential measurement qualities" (Bachman & Palmer, 1996:19), because they are the "major justification for using test scores as a basis for making inferences or decisions" (ibid). Definitions of the types of validity and reliability are presented in this section.

Validity And Reliability

Defining Validity

The following quotation from the AERA (American Educational Research Association) indicates:

"Validity is the most important consideration in test evaluation. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores. Test validation is the process of accumulating evidence to support such inferences. A variety of inferences may be made from scores produced by a given test, and there are many ways of accumulating evidence to support any particular inference. Validity, however, is a unitary concept. Although evidence may be accumulated in many ways, validity always refers to the degree to which that evidence supports the inferences that are made from the score. The inferences regarding specific uses of a test are validated, not the test itself." (AERA et al., 1985: 9)

Messick stresses that "it is important to note that validity is a matter of degree, not all or none".
The College English Department at Dalian Nationalities University (DLNU) has been selected as one of thirty-one institutions of the College English Reform Demonstration Project in the Peoples republic of China. In College English (CE) course of DLNU, the speaking test is one of the four subtests of the final examination of English assessment. The examination uses two different formats. One is a semi-direct speaking test, in which examinees talk to microphones connected to computers, and have their speeches recorded for the teachers to rate afterwards. The other is a face-to-face interview. This research in this paper aims to ascertain the degree of the reliability and validity of the speaking tests. By analyzing the results of the research, teachers will become more aware of the validity and reliability of oral assessments, including how to improve the reliability and validity of speaking tests. I, as a language teacher, will gain insight into the operation of language proficiency te st, In order to better degree of reliability and validity of a particular test, I will also take other qualities of test usefulness into account when designing the language proficiency test., such as practicality and authenticity. Research questions: This study mainly addresses the questions of validity and reliability of the speaking test administered at DLNU. They are comprehensive concepts that involve analysis of test tasks, administration, rating criteria, examinee and testers attitudes towards the test, the effect of the test on teaching and teacher or learner attitudes towards learning the tests (Luoma, 2004). Therefore, the purpose of this study is to answer the following research questions: 1. Is the speaking test administered at DLNU a valid and reliable test? This question can involve the following two sub-questions: 1) To what extent is the speaking test administered at DLNU reliable? 2) To what extent is the speaking test administered at DLNU valid? 2. In what aspects and to what extent may the validity and reliability of the speaking test administered at DLNU be improved? Literature Review This chapter presents a theoretical framework of speaking construct, ways of testing speaking, marking of speaking test and the reliability and validity of speaking test, also introduces the situation of speaking test in China. Analyzing Speaking And Speaking Test The Nature Of Speaking Speaking, as a social and situation-based activity, is an integral part of peoples daily lives (Luoma, 2004). Testing second language speaking is often claimed to be a much more difficult undertaking than testing other second language abilities, capacities or competencies, skillsà ¼Ãâ Underhill, 1987). Assessment is difficult not only because speaking is fleeting, temporal and ephemeral, but also because of the comprehensibility of pronunciation, the special nature of spoken grammar and spoken vocabulary, as well as the interactive and social features of speaking (Luoma, 2004), because of the ââ¬Å"unpredictability and dynamic natureâ⬠of language itself (Brown, 2003). To have a clear understanding of what it means to be able to speak a language, we must understand that the nature and characteristics of the spoken language differ from those of the written form (Luoma, 2004; McCarthy OKeefe, 2004; Bygate, 2001) in its grammar, syntax, lexis and discourse patterns due to the nature of spoken language. 
Spoken English involves reduced grammatical elements arranged into formulaic chunk expressions or utterances with less complex sentences than written texts. Spoken English breaks the standard word order because the omitted information can be restored from the instantaneous context (McCarthy OKeefe, 2004; Luoma, 2004; Bygate, 2001; Fulcher, 2003). Spoken English contains frequent use of the vernacular, interrogatives, tails, adjacency pairs, fillers and question tags which have been interpreted as dialogue facilitators (Luoma, 2004; Carter McCarthy, 1995). The speech also contains a fair number of slips and errors such as mispronounced words, mixed sounds, and wrong words due to inattention, which is often pardoned and allowed by native speakers (Luoma, 2004). Conversations are also negotiable, unpredictable, and susceptible to social and situational context in which the talks happen (Luoma, 2004). The Importance Of Speaking Test Testing oral proficiency has become one of the most important issues in language testing since the role of speaking ability has become more central in language teaching with the advent of CLA (Nakamura, 1993). Of the four language skills (listening, speaking, reading, writing), listening and reading occur in the receptive mode, while speaking and writing exist in the productive mode. Understanding and absorption of received information are foundational while expression and use of acquired information demonstrate an improvement and a more advanced test of knowledge. A lot of interests now in oral testing is partly because second language teaching is more than ever directed towards the speaking and listening skillsà ¼Ãâ Underhill, 1987). Language teachers are engaged in ââ¬Å"teaching a language through speakingâ⬠(Hughes, 2002:7). On one hand, spoken language is the focus of classroom activity. There are often other aims which the teacher might have: for instance, helping the student gain awareness of practice in some aspect of linguistic knowledge (ibid). On the other hand, speaking test, as a device for assessing the learners language proficiency also functions to motivate students and reinforce their learning of language. This represents what Bachman (1991) has called an ââ¬Å"interfaceâ⬠between second language acquisition (SLA) and language testing research. However, assessing speaking is challenging, ââ¬Å"because there are many factors that influence our impression of how well someone can speak a languageâ⬠(Luoma, 2004:1) as well as unpredictable or impromptu nature of the speaking interaction. The testing of speaking is difficult due to practical obstacles and theoretical challenges. Much attention has been given to how to perfect the assessment system of oral English and how to improve its validity and reliability. The communicative nature of the testing environment also remains to be considered (Hughes, 2002). The Construct Of Speaking Introduction To Communicative Language Ability (CLA) A clear and explicit definition of language ability is essential to language test development and use (Bachman,1990). The theory on which a language test is based determines which kind of language ability the test can measure, This type of validity is called construct validity. According to Bachman (1990:84), CLA can be described as ââ¬Å"consisting of both knowledge or competence and the capacity for implementing or executing that competence in appropriate, contextualized communicative language useâ⬠. 
CLA includes three components: language competence, strategic competence and pyschophysiological mechanisms. The following framework (figure 2.1) shows components of communicative language ability in communicative language use (Bachman,1990:85). Knowledge Structures Language Competence Knowledge of the world Knowledge Of Language Strategic Competence Psychophysiological Mechanisms Context Of Situation This framework has been widely accepted in the field of language testing. Bachman (1990:84) proposes that ââ¬Å"language competenceâ⬠essentially refers to a set of specific knowledge components that are utilized in communication via language. It comprises organizational and pragmatic competence. Two areas of organizational knowledge that Bachman (1990) distinguishes are grammatical knowledge and textual knowledge. Grammatical knowledge comprises vocabulary, syntax, phonology and graphology, and textual knowledge, comprises cohesion and rhetorical or conversational organization. Pragmatic competence shows how utterances or sentences and texts are related to the communicative goals of language users and to the features of the langue-use setting. It includes illocutionary actsà ¼Ã
âor language functions, and sociolinguistic competence, or the knowledge of the sociolinguistic conventions that govern appropriate language use in a particular culture and in varying situations in t hat culture (Bachman, 1987). Strategic competence refers to mastery of verbal and nonverbal strategies in facilitating communication and implementing the components of language competence. Strategic competence is demonstrated in contextualized communicative language use, such as socialcultural knowledge, real-world knowledge and mapping this onto the maximally efficient use of existing language abilities. Psychophysiological competence refers to the visual and auditory skill used to gain access to the information in the administrators instructions. Among other things, psychophysiological competence includes things like sound and light. Fulchers Construct Definition To know what to assess in a speaking test is a prime concern. Fulcher (1997b) points out that the construct of speaking proficiency is incomplete. Nevertheless, there have been various attempts to reflect the underlying construct of speaking ability and to develop theoretical frameworks for defining the speaking construct. Fulchers framework (figure 2.2) (Fulcher, 2003: 48) describes the speaking construct. As Fulcher (2003) points out that there are many factors that could be included in the definition of the construct: Phonology: the speaker must be able to articulate the words, have an understanding of the phonetic structure of the language at the level of the individual word, have an understanding of intonation, and create the physical sounds that carry meaning. Fluency and accuracy: these concepts are associated with automaticity of performance and the impact on the ability of the listener to understand. Accuracy refers to the correct use of grammatical rules, structure and vocabulary in speech. Fluency has to do with the ââ¬Ënormal speed of delivery to mobilise ones language knowledge in the service of communication at relatively normal speed. The quality of speech needs to be judged in terms of the gravity of the errors made or the distance from the target forms or sounds. Strategic competence: this is generally thought to refer to an ability to achieve ones communicative purpose through the deployment of a range of coping strategies. Strategic competence includes both achievement strategies and avoidance strategies. Achievement strategies contain overgeneralization/morphological creativity. Learners transfer knowledge of the language system onto lexical items that they do not know, for example, saying ââ¬Å"buyedâ⬠instead of ââ¬Å"boughtâ⬠, Speakers also learn approximation: learners replace an unknown word with one that is more general or they use exemplification, paraphrasing (use a synonym for the word needed), word coinage (invent a new word for an unknown word), restructuring (use different words to communicate the same message), cooperative strategies (ask for help from the listener) , code switching (take a word or phrase from the common language with the listener in order to be understood) and non-linguistic strategies (use gestur es or mime, or point to objects in the surroundings to help to communicate). Avoidance or reduction strategies consist of formal avoidance (avoiding using part of the language system) and functional avoidance (avoiding topical conversation). Strategic competence includes selecting communicative goals and planning and structuring oral production so as to fulfill them. 
Textual knowledge: competent oral interaction involves some knowledge of how to manage and structure discourse, for example, through appropriate turn-taking, opening and closing strategies, maintaining coherence in ones contributions and employing appropriate interactional routines such as adjacency pairs. Pragmatic and sociolinguistic knowledge: effective communication requires appropriateness and the knowledge of the rules of speaking. A range of speech acts, politeness and indirectness can be used to avoid causing offence. Ways Of Testing Speaking Clark (1979) puts forward a theoretical basis to discriminate three types of speaking tests: direct, semi-direct and indirect tests. Indirect tests belong to ââ¬Å"procommunicativeâ⬠era in language testing, in which the test takers are not actually required to speak. It has been regarded as having the least validity and reliability, while the other two formats are more widely used (OLoughlin, 2001). In this section, the characteristics, advantages and disadvantages of the direct and semi-direct test are presented, The Oral Proficiency Interview Format One of the earliest and most popular direct speaking test formats, and one that continues to exert a strong influence, is the oral proficiency interview (OPI) ââ¬âdeveloped originally by the FSI (Foreign Service Institute) in the United States in the 1950s and later adopted by other government agencies. It is conducted with individual test-taker by a trained interviewer, who assesses the candidate using a global band scale (OLoughlin, 2001). It typically begins with a warm-up discussion of a few easy questions, such as getting to know each other or talking about the days events. Then the main interaction contains the pre-planned tasks, such as describing or comparing pictures, narrating from a picture series, talking about a pre-announced or examiner-selected topic, or possibly a role-play task or a reverse interview where the examinee asks question of the interviewer (Luoma. 2004). An important example of this type of test is the speaking component of the International English L anguage Testing System (IELTS), which is adopted in 105 different countries around the world each year. The Advantage Of An Interview Format The oral interview was recognized as the most commonly used speaking test format. Fulcher (2003) suggests that it is partly because the questions used can be standardized, making comparison between test takers easier than when other task types are used. Using this method, the instructor can get a sense of the oral communicative competence of students and can overcome weakness of written exams, because the interview, unlike written exams, ââ¬Å"is flexible in that the questions can be adapted to each examinees performance, and thus the testers have more controls over what happens in the interactionâ⬠(Luoma, 2004:35). It is also relatively easy to train raters and obtain high inter-rater reliability (Fulcher, 2003). The Disadvantage Of An Interview Format However, concern and skepticism exist about whether it is possible to test other competencies or knowledge because of the nature of the discourse that the interview produces (van Lier, 1989). a. Issue of time For the instructor, time management can be quite an issue. For instance, using a two-hour period for exams for 20 students means each student is allowed only six minutes for testing. This includes the time needed to enter the room and adjust to the setting. 
With such a time limit, the student and instructor can hardly have any kind of normal, real-world conversation.

b. Issue of asymmetrical relationship

The asymmetrical relationship between examiners and candidates elicits a form of inauthentic and limited socio-cultural context (van Lier, 1989; Savignon, 1985; Yoffe, 1997). Yoffe (1997) commented of the ACTFL (American Council on the Teaching of Foreign Languages) OPI that the tester and the test-taker are "clearly not in equal positions" (Yoffe, 1997). The asymmetry is not specific to the OPI but is inherent in the notion of an interview as an exchange wherein one person solicits information in order to arrive at a decision while the interlocutor produces what he or she perceives as most valued. The interviewee is, in most cases, acutely aware of the ramifications of the OPI rating and is, consequently, under a great deal of stress. Van Lier (1989) also challenges the validity of the OPI in terms of this asymmetry, because "the candidate speaks as to a superior and is unwilling to take the initiative" (van Lier, 1989). Under such an unequal relationship, features of speech discourse such as turn-taking, topic nomination and development, and repair strategies are all substantially different from normal conversational exchanges (see van Lier, 1989).

c. Issue of interviewer variation

Given that the interviewer has considerable power over the examinee in an interview, concerns have been raised about the effect of the interlocutor (examiner) on the candidate's oral performance. Different interviewers vary in their approaches and attitudes toward the interview. Brown (2003) warns of the danger such variation poses to fairness. O'Sullivan (2000) conducted an empirical study indicating that learners perform better when interviewed by a woman, regardless of the sex of the learner. Underhill (1987: 31) expresses his concern that unscripted "flexibility... means that there will be a considerable divergence between what different learners say, which makes a test more difficult to assess with consistency and reliability."

Testing Speaking In Pairs

There has been a shift toward a paired speakers format: two assessors examine two candidates at a time. One assessor interacts with the two candidates and rates them on a global scale, while the other does not take part in the interaction and assesses using an analytic scale. The paired oral test has been used as part of large-scale, international, standardized oral proficiency tests since the late 1980s (Ildikó, 2001). The Key English Test (KET), Preliminary English Test (PET), First Certificate in English (FCE) and Certificate in Advanced English (CAE) all make use of the paired format. In a typical test, the interaction begins with a warm-up, in which the examinees introduce themselves to the interlocutor, followed by two pair-interaction tasks. The talk may involve each candidate first comparing two photographs, as in the Cambridge First Certificate (Luoma, 2004), then a two-way collaborative task between the two candidates based on further photographs, artwork or computer graphics, and end with a three-way discussion among the two examinees and the interlocutor about a general theme related to the earlier discussion.

The advantages of the paired interview format

Many researchers claim that the paired format is preferable to the OPI, for the following reasons:
a. The changed role of the interviewer frees up the instructors to pay closer attention to the production of each candidate than if they were participants themselves (Luoma, 2004).

b. The reduced asymmetry allows more varied interaction patterns, which elicit a broader sample of discourse and more turn-taking than was possible in the highly asymmetrical traditional interview (Taylor, 2000).

c. Task types based on pair work generate a positive washback effect on classroom teaching and learning (Ildikó, 2001). Where the instructor follows Communicative Language Teaching (CLT) methodology, in which pair work may take up a significant portion of a class, it is appropriate to incorporate similar activities in the exam. In that way the exam itself is much better integrated into the fabric of the course, and students can be tested on performance related to activities done in class. There may also be benefits for student motivation: if students are aware that they will be tested on activities similar to the ones done in class, they may have more incentive to be attentive and use class time effectively.

The disadvantages of the paired interview format

There are, however, also concerns voiced regarding the paired format.

a. Mismatches between peer interactants

The most frequently raised criticisms of the paired speaking test relate to various forms of mismatch between peer interactants (Fulcher, 2003). Ildikó (2001) points out that when a candidate has to work with an incomprehensible or uncomprehending peer partner, it may negatively influence the candidate's performance. As a consequence, in such cases it is quite impossible to make a valid assessment of the candidate's abilities.

b. Lack of familiarity between peer interactants

The extent to which this testing format actually reduces the anxiety of test-takers compared to other formats remains doubtful (Fulcher, 2003). O'Sullivan (2002) suggests that the spontaneous support offered by a friend reduces anxiety and improves task performance under experimental conditions. However, the chances are quite high that the examinee will be paired with a stranger as his or her peer interactant, and it is hard to imagine how strangers can carry out naturally flowing conversations. Estrangement, misinterpretation and even breakdown may occur during their talk.

c. Lack of control of the discussion

Problems arise if the examiner loses control of the oral task (Luoma, 2004). When the instructions and task materials are not clear enough to facilitate the discussion, the examinees' conversation may go astray. Luoma (2004) points out that testers often feel uncertain about how much responsibility they should give to the examinees. Furthermore, without elicitation from the examiner, examinees do not know what kind of performance will earn them good results. When one of the examinees has said too little, the examiner ought to monitor and step in to give help where necessary.
Semi-Direct Speaking Tests

The term "semi-direct" is employed by Clark (1979: 36) to describe those tests that elicit speech "by means of tape recordings, printed test booklets, or other 'non-human' elicitation procedures, rather than through face-to-face conversation with a live interlocutor." Appearing during the 1970s as an innovative adaptation of the traditional OPI, the semi-direct method normally follows the general structure of the OPI and makes an audio recording of the test taker's performance, which is later rated by one or more trained assessors (Malone, 2000). Examples of the semi-direct type used in the U.S.A. are the Simulated Oral Proficiency Interview (SOPI) and the Test of Spoken English (TSE) (Ferguson, 2009). Examples in the U.K. include the Test in English for Educational Purposes (TEEP) and the Oxford-ARELS Examinations (O'Loughlin, 2001). Another mode of delivery is testing by telephone, as in the PhonePass test (which mainly consists of reading sentences aloud or repeating sentences), or even video-conferencing (Ferguson, 2009).

The Advantages Of The Semi-Direct Test Type

First, the semi-direct test is more cost efficient than direct tests, because many candidates can be tested simultaneously in large language laboratories, administered by any teacher, language lab technician or aide, with the candidates hearing taped questions and having their responses recorded (Malone, 2000).

Second, the mode of testing is quite flexible. It provides a practical solution in situations where it is not possible to deliver a direct test (O'Loughlin, 2001), and it can be adapted to the desired level of examinee proficiency and to specific examinee age groups, backgrounds and professions (Malone, 2000).

Third, semi-direct testing represents an attempt to standardize the assessment of speaking while retaining the communicative basis of the OPI (Shohamy, 1994). It offers the same quality of interview to all examinees, and all examinees respond to the same questions, which removes the effect that a human interlocutor would have on the candidate (Malone, 2000). The uniformity of the elicitation procedure greatly increases the reliability of the test. Some empirical studies (Stansfield, 1991) show high correlations (0.89-0.95) between direct and semi-direct tests, indicating that the two formats can measure the same language abilities and that the SOPI can serve as an equivalent and surrogate for the OPI. However, there are also disadvantages.

The Disadvantages Of The Semi-Direct Test Type

First, the speaking task in a semi-direct oral test is less realistic and more artificial than in the OPI (Clark, 1979; Underhill, 1987). Examinees use artificial language to "respond to tape-recorded questions in situations the examinee is not likely to encounter in a real-life setting" (Clark, 1979: 38). They may feel stressed speaking to a microphone rather than to another person, especially if they are not accustomed to the laboratory setting (O'Loughlin, 2001).

Second, the communicative strategies and speech discourse elicited in these semi-direct SOPIs are quite different from those found in typical face-to-face interaction, being more formal and less conversation-like (Shohamy, 1994). In a tape-mediated test candidates tend to use written-style language, more of a report or narration, whereas in the OPI they focus more on interaction and on the delivery of meaning.
Third, there are often technical problems that can result in poor-quality recordings, or even no recording at all, in the SOPI format (Underhill, 1987).

In conclusion, one cannot assume any equivalence between a face-to-face test and a semi-direct test (Shohamy, 1994). It may be that they measure different things, different constructs, so the mode of test delivery should be chosen on the basis of test purpose, accuracy requirements, practicability and impartiality (Shohamy, 1994). Stansfield (1991) proposes that the OPI is more applicable to placement testing and curriculum evaluation, while the SOPI is more appropriate for large-scale tests requiring high reliability.

Marking Of Speaking Tests

Marking and scoring pose a challenge in assessing second language oral proficiency. Since only a few elements of the speaking skill can be scored objectively, human judgments play a major role in assessment. How to establish valid, reliable and effective marking criteria scales and high-quality scoring instruments has always been central to the performance testing of speaking (Luoma, 2004). It is important to have clear, explicit criteria to describe the performance, and equally important for raters to understand and apply these criteria, so that they can score consistently and reliably. For these reasons, rating and rating scales have been a central focus of research in the testing of speaking (Ferguson, 2009).

Definition Of Rating Scales

A rating scale, also referred to as a "scoring rubric" or "proficiency scale", is defined by Davies et al. as follows (see Fulcher, 2003):

· consisting of a series of bands or levels to which descriptions are attached
· providing an operational definition of the constructs to be measured in the test
· requiring training for its effective operation

Holistic And Analytic Rating Scales

There are different types of rating scales used for scoring speech samples. One of the traditional and commonly used distinctions is between holistic and analytic rating scales.

Holistic rating scales, also referred to as global rating scales: with these scales, the rater attempts to match the speech sample with a particular band whose descriptors specify a range of defining characteristics of speech at that level. A single score is given to each speech sample, either impressionistically or guided by a rating scale, to encapsulate all the features of the sample (Bachman and Palmer, 1996).

Analytic rating scales: these consist of separate scales for different aspects of speaking ability (e.g. grammar/vocabulary, pronunciation, fluency, interactional management, etc.). A score is given for each aspect (or dimension), and the resulting scores may be combined in a variety of ways to produce a single composite overall score. Analytic scales include detailed guidance for raters and provide rich information on specific strengths and weaknesses in examinee performance (Fulcher, 2003). They are particularly useful for diagnostic purposes and for providing a profile of competence in the different aspects of speaking ability (Ferguson, 2009).

The type of scale selected for a particular test of speaking will depend upon the purpose of the test.
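To make the idea of a composite score concrete, here is a minimal sketch of one way analytic sub-scores might be combined into a single overall score. The categories, weights and the 0-9 band range are invented for illustration and are not taken from any of the scales or tests discussed above; real rating schemes combine sub-scores in many different ways.

```python
# Hypothetical illustration only: the categories, weights and the 0-9 band
# range below are invented for this sketch, not taken from any published
# rating scale discussed in the text.

ANALYTIC_WEIGHTS = {
    "grammar_vocabulary": 3,
    "pronunciation": 2,
    "fluency": 3,
    "interactional_management": 2,
}

def composite_score(sub_scores):
    """Combine analytic sub-scores (assumed 0-9 bands) into one weighted overall score."""
    total_weight = sum(ANALYTIC_WEIGHTS.values())
    weighted_sum = sum(ANALYTIC_WEIGHTS[category] * sub_scores[category]
                       for category in ANALYTIC_WEIGHTS)
    return weighted_sum / total_weight

# One examinee's analytic profile and the resulting single overall score.
profile = {
    "grammar_vocabulary": 6.0,
    "pronunciation": 7.0,
    "fluency": 6.0,
    "interactional_management": 7.0,
}
print(composite_score(profile))  # 6.4
```

The point of the sketch is simply that the sub-scores carry diagnostic information that the single composite figure does not, which is why analytic scales are favoured for profiling strengths and weaknesses.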
Validity And Reliability Of Speaking Tests

Bachman And Palmer's Theories On Test Usefulness

The primary purpose of a language test is to provide a measure that can be interpreted as an indicator of an individual's language ability (Bachman, 1990; Bachman and Palmer, 1996). Bachman and Palmer (1996) propose that test usefulness comprises six test qualities: reliability, construct validity, authenticity, interactiveness, impact (washback) and practicality. Their notion of usefulness can be expressed as in Figure 2.3:

Usefulness = Reliability + Construct validity + Authenticity + Interactiveness + Impact + Practicality

These qualities are the main criteria used to evaluate a test. "Two of the qualities, reliability and validity, are critical for tests and are sometimes referred to as essential measurement qualities" (Bachman and Palmer, 1996: 19), because they are the "major justification for using test scores as a basis for making inferences or decisions" (ibid.). The definitions of the types of validity and reliability are presented in this section.

Validity And Reliability

Defining Validity

A quotation from the AERA (American Educational Research Association) indicates:

"Validity is the most important consideration in test evaluation. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores. Test validation is the process of accumulating evidence to support such inferences. A variety of inferences may be made from scores produced by a given test, and there are many ways of accumulating evidence to support any particular inference. Validity, however, is a unitary concept. Although evidence may be accumulated in many ways, validity always refers to the degree to which that evidence supports the inferences that are made from the score. The inferences regarding specific uses of a test are validated, not the test itself." (AERA et al., 1985: 9)

Messick stresses that "it is important to note that validity is a matter of degree, not all or none (Mess
Wednesday, November 13, 2019
Music Censorship :: essays research papers
Music Censorship: The Circumstances Causing the Controversy

Imagine, if you will, a world where we are told what music to sing, what music to play, and even what we may listen to in the privacy of our own homes. That world already exists as a reality in more countries than you might imagine, and that very reality is knocking on our door: in the USA, lobbying groups have succeeded in keeping popular music off the concert stage, out of the media, and off the shelves. Of course, if presented with this contingency, any one of us would declare how horrible this reality would be. Why, then, do we hear about citizens and organizations fearfully protesting the apparently so inalienable right to express ourselves through music? As a society we want our young people to be literate, thoughtful, and caring human beings; however, we also attempt to control what they read, listen to, and see, and ultimately what they think and care about. One can understand the instinct to "protect" children from dangerous or disturbing ideas and information, but this combination of the multiplicity of values and the concern for young people's minds keeps censorship alive in schools, public libraries, and other common places.

"We favor music censorship? No, that's not true," says Wendy Wright of Concerned Women for America, an organization on the enemy list of virtually all other anti-censorship supporters. "Censorship means that the government restrains speech. We are in favor of those in the music industry using common sense: in essence, that they don't promote behavior or activities that they wouldn't want committed against their wife or children." CWFA sees the music in question as having the potential to cultivate certain ideas in the minds of the youth. "The argument that it does not affect kids, that it does not promote similar behavior, is ridiculous. If that were true, they would not advertise or rely on marketing; both fields depend on the fact that humans can be enticed into doing something that they wouldn't have thought up on their own."

In our community, there are mixed views about this issue, just as there are in the wider world setting where this conflict is now unfolding: "I think there should definitely be some censorship, like with the movies where there is a rating system. The music that's out now is too graphic for younger kids to be listening to and it's beginning to evidently corrupt our society."