NPTEL Lecture 3 (PDF) - The Psychology of Language

Document Details


Indian Institute of Technology Guwahati

Dr. Naveen Kashyap

Tags

psycholinguistics, language evolution, research methods, psychology

Summary

This document is a lecture from a course on the psychology of language. It discusses research methods in psycholinguistics, including event-related potentials (ERPs). The focus is on how the brain processes language, using examples such as the N400 and P300 waves and their connection to sentence comprehension and meaning.

Full Transcript


The Psychology of Language
Dr. Naveen Kashyap
Department of Humanities and Social Sciences
Indian Institute of Technology Guwahati
Module No. #01, Lecture No. #03: Doing Research in Language - I

Hello friends, welcome back to this course on the Psychology of Language. This is lecture number 3, and in this lecture we will look at how research is done in the language sciences, or in psycholinguistics. Now, when I say research in psycholinguistics, the methods I am going to describe are not used exclusively in language studies; they are commonly used across a number of other scientific disciplines. So some of the principles and facts I am going to present here are common to most of the behavioural and social sciences.

Before we start our journey today into how research is done in the science of language, let us do a quick recap of what we did in the first two lectures. In lectures 1 and 2 we focused on how language evolved, and even before that, on the nature of language: what it is. The best way to define language is as a medium of communication between people, a medium for expressing ideas between two entities. So we began the lecture series by defining language, and by pinpointing how communication and language differ. We then looked at the most primitive form of language known to human beings, which is animal communication. We looked at animal communication systems: why animals communicate and what communication systems exist in animals, and from there we derived certain rules on which animal communication systems work. We looked at reasons like finding food, finding mates, and pointing out enemies; these are the reasons animals use a communication system.

We then moved on to the development of human language: how human language may have developed from animal communication systems. There we took laughter as a model and described how a laugh never simply means that something funny has been said; a laugh has multiple meanings. Animal communication points to here-and-now things, to very basic ideas, whereas the human laugh or smile is not merely a signal that something funny was said: it can sometimes mean "I like you", sometimes "I support your views or ideas", and so many other things. Using this model system, we explained how animal communication differs from human language. We then looked at various features of the human language system and how they vary from animal communication systems: for example, the nature of sentence structure, the nature of arbitrary symbols, and the nature of production. Animals can communicate only a limited set of ideas.
Their way of communicating is not structured into new ideas, nor governed by rules of that kind; we pointed that out. We also saw how arbitrary symbols, which are the words of different languages, can mean the same concept, express the same idea. Then we looked at how human languages are built up, and there we pointed out something called duality of patterning: how basic units add up, combine together, to form bigger units in language. We then looked at the standard pyramidal structure of language, in terms of phonemes, morphemes, words, sentences, and discourse, and the levels of syntax, semantics, and discourse, and how this structure is built.

We also looked briefly at how language evolved: the proto-humans, Neanderthal man, and Homo sapiens; how they developed a proto-language, and how this proto-language developed into the fully formed language that we know. And we gave evidence to show that this kind of evolution would have been possible. The very existence of pidgin, a language of limited scope that can nevertheless express ideas between two communities which have no language in common, gives strong support to the idea that language could have evolved from a proto-language. We looked at the process of recursion, which is very basic to the idea of language: the same set of rules, the same structure, is repeated to form longer sentences. We looked at how syntax is formed, and we looked at evidence for both the continuity and the discontinuity theories of language. The continuity theory believes that language developed as a gradual process, through several stages, from animals to humans. The discontinuity theory believes that language evolved very rapidly, out of a single mutation. We saw what these theories propose and what kinds of evidence they are based on. And then we looked at reasoning and evidence for the idea that language evolved in a sequential manner, through fossilised structures, in a chain right from the basic communication that the proto-humans were using up to modern language.

So that is what we did in the first two lectures: we looked at the evolution of language, at how the present language system differs from animal language systems, and we provided evidence and looked at some primary theories of language development. Having said that, the first thing we need to know in any course is what tools and techniques are available for conducting research in that particular area or field. And that is what we are now going to do in this part of the course: we are going to see the tools and techniques which are available to us and which help us in doing research in language.

(Refer Slide Time: 08:10)

So I will start with a brief story, and through it I will show you how language research is done. Then we will look at the procedures and techniques which are available for doing research.
In this story, we first describe what an ERP, an event-related potential, is. ERPs are changes in the brain's electrical potential caused by the perception of certain stimuli. We will take the story of how the N400, a negative-going peak, was discovered by Marta Kutas, in the laboratory of Hillyard; both are prominent psycholinguists. Before the discovery of the N400, the P300 was the signature of language comprehension.

Let us look in a little more detail at what an ERP is and why it is needed. The brain is doing a lot of activities at any point in time. If we want to know how the brain is processing a certain stimulus, we have to look at the electrical changes happening in the brain while the stimulus is being presented, or after its presentation, and compare them with a state in which no stimulus is presented. We then cut out the section of the EEG that is of interest to us. Suppose I present my stimulus at some point, and my window of interest is 1000 milliseconds, because I believe some change will happen within that second. I take the EEG while presenting the stimulus to, let us say, 1000 people, and again while presenting no stimulus to 1000 people, and I average the recordings. From this average I can separate the brain's default background waveforms from the changes in electrical potential that come from the presentation of the stimulus. How do we do that? We compare the no-stimulus and stimulus conditions, and from that we can tell what regions of the brain, what processes in the brain, are active during a particular cognitive task. A toy sketch of this trial-averaging logic appears at the end of this passage.

Now consider sentence comprehension, something we will see in future classes: how does the brain read a sentence, and how does it extract meaning from a sentence? Before Marta Kutas came in and found that there is something called the N400, a lot of EEG research had been done, and it had established something called the P300 wave. The P300 is a positive-going wave: 300 milliseconds after the presentation of the stimulus, we see a deviation in the positive direction. (Note that EEG plots are conventionally drawn with negative upward, so take care in reading which side is positive.) Since this is the average deviation obtained across trials, it tells us about one common feature of the stimuli of interest, which here were sentences in which something unexpected happens.
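As a toy illustration of the trial-averaging logic just described (not the actual analysis pipeline of Kutas and Hillyard's studies), here is a minimal sketch in Python. All names and numbers are invented for illustration; we assume single-channel epochs, time-locked to stimulus onset, stacked as a trials-by-samples array.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)  # 1000 ms window, time-locked to stimulus onset

def simulate_epochs(n_trials, with_p300=False):
    """Simulate single-channel EEG epochs: random background activity,
    plus (optionally) a small positive deflection around 300 ms."""
    noise = rng.normal(0.0, 10.0, size=(n_trials, t.size))  # microvolts
    if with_p300:
        erp = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        return noise + erp
    return noise

stim = simulate_epochs(1000, with_p300=True)      # stimulus condition
no_stim = simulate_epochs(1000, with_p300=False)  # no-stimulus condition

# Averaging across trials cancels the random background EEG,
# leaving only the stimulus-locked deflection: the ERP.
difference = stim.mean(axis=0) - no_stim.mean(axis=0)
peak_ms = 1000 * t[np.argmax(difference)]
print(f"largest positive deflection near {peak_ms:.0f} ms")  # close to 300 ms
```

The point of the sketch is only that averaging over many trials makes a tiny stimulus-locked component visible against much larger ongoing activity; real ERP work adds filtering, artifact rejection, and baseline correction on top of this.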
The question of interest was the processing of a simple statement. The sentence processed ran along the lines of "She put on her high heeled SHOES": up to the last word, everything is in small letters, and then the final word suddenly appears in capital letters. That is the deviation, and for this kind of deviation, or oddball as we call it, the brain shows a P300 deflection. The P300 deflection tells us that when the brain comes to this part of the sentence, it is able to notice the difference between capitals and small letters. This is a deviation in terms of the sentence's physical form.

But what happens when the meaning is wrong in a sentence? For example: "She spreads the warm bread with SOCKS." The moment you read this sentence, you come to know that the last word should not be there. This is related to the cloze effect: people are able to predict what kind of word will come next (we will look at this in the coming lectures). So as soon as a sentence like this is given, the brain is able to notice the difference. And note what kind of difference it is. In the earlier case the difference was in sentence form, capitals versus small letters; here the critical word is in capitals in both cases, and the problem is in terms of meaning. When this happens, Marta Kutas found that she was not getting a P300. Rather, in the average EEG across multiple subjects, she was getting a deflection in the negative direction 400 milliseconds past the presentation of the critical word. So it is called the N400: a negative deflection, 400 milliseconds past the stimulus.

Basically, then, if a structural difference is present, the brain gives a positive P300 deflection; but if a meaning-related problem occurs in a sentence, an N400 results, a deflection on the negative side at 400 milliseconds. Why is this of interest? Because it shows that the brain processes syntactic structures and semantic forms in different ways. This was the first time we were able to see that the brain processes the meaning and the structure of sentences differently. This is one kind of research that we do in language, an ERP study. From there on, these two basic components, the P300, called the oddball component of the EEG, and the N400, which marks the semantic component, have been the two primary components widely used in studies of sentence production and comprehension. (There is also something called the N100, but that is an attentional component, so we are not bringing it in here.)

Why all this? It was to show you what kind of research is possible in language. Now let us look at how research is done in language. As I said, the principles I am going to present here are not specific to language studies or psycholinguistics; they can be used in any behavioural or social science.

(Refer Slide Time: 17:44)

The first thing, before we do research in any behavioural science, or in language specifically, is to explain and understand what a theory is.
Before finding a problem, before doing research in language, we have to first look at a theory, because a gap in the theory will provide you a problem of interest on which you want to do research. So what is a theory? A theory provides a conceptual framework for explaining a set of observations. It is a body of background knowledge, a kind of proposal, which gives the reason, or provides the evidence, for the observations that we have.

Take the observations we saw just now, the N400 and the P300. There are theories which say that different regions of the brain process them, on different time frames. The N400 and P300 are two different things. The P300 is more of an attentional component, so no meaning has to be derived, and it happens very early on. But the N400 requires the brain to extract the meaning of the word, the concept of the word, its abstract content, and for that reason this activity happens late in processing, in the temporal lobe, or the medial temporal lobe. And one deflection is positive while the other is negative. The theory gives you the reason: the medial temporal lobe is the structure where semantic memory is stored, and verification in terms of meaning can only be done by semantic memory, whereas noticing capitals versus small letters is more or less an attentional feature, which is why it happens early on. This is the kind of theory we can give for the research that I explained at the start of this lecture. So a theory gives the reason, the evidence, the baseline on which research is done.

A theory should also make predictions about the future. A theory does not only explain a set of observations; it also predicts future observations. If an experiment is done, a theory provides evidence for why the result comes out as it does. But the theory does not only do that: it also gives you a line of approach for doing newer experiments, predicting what observations will come if you vary something in the initial experiment. And these predictions can be tested in experiments. So the two main purposes of a theory are explaining observations and predicting future observations that can be tested by experiment.

After theory, we come to the level of the hypothesis. What is a hypothesis? A hypothesis is a tentative solution to a problem: a prediction, based on a theory, about the solution to a problem. Let us say I have a problem: how the brain interprets meaning and syntactic rules, at the meaning level and at the surface level of sentence processing.
If I have a theory that there are two different ways in which the brain handles this, then the hypotheses, or predictions, are the possible solutions to the question of how the brain is actually doing it. A prediction derived from a theory is what is known as a hypothesis. These are tentative solutions, in the sense that a given hypothesis may not be the correct solution, but it is a possible solution. Once a hypothesis is laid out, what the researcher then does is collect data based on the prediction of the theory, and then test the hypothesis, in order to test the theory. How do we check whether the theory is correct? If the theory predicts that a particular experiment will turn out a certain way, we collect data in response and test the hypothesis against it. If the hypothesis is supported by the data, we say the theory is supported; otherwise we are bound to re-formulate the theory.

Another important thing in scientific research is the falsification criterion. What is the falsification criterion? A hypothesis derived from a theory should have the possibility, the probability, of being rejected. Hypotheses, as I said, are tentative solutions, so any hypothesis that has been proposed should always have a chance of being rejected; only then is it scientific research. A theory must make predictions that can be disconfirmed by data. If we make hypotheses that are always confirmed by data, that is not a right way of doing research. Any hypothesis we state should always have the possibility of being disconfirmed, and this is called the falsification, or falsifiability, criterion of research. A hypothesis is a viable but tentative solution, which means that the chance always exists that we reject it, and then revise the theory or derive more hypotheses from it. That is why, when we do research, we never have just one hypothesis; we have multiple hypotheses to test, because there are multiple possible solutions.

We can never prove a theory true, but we can prove it false. A supported hypothesis gives the possibility, a probability, that a particular theory is correct, but it never says the theory is 100% correct; we never have a probability of one. But we can have a probability of zero: we can always show, through a hypothesis, that the theory is wrong. Also, theories that cannot be falsified cannot be considered scientific, because then they become facts. Take a claim like "the sun will rise in the east." If you want to test this, note that it is going to happen as long as the earth rotates, so there is no point in making hypotheses about it.
It is not a scientific theory to look at; it is a fact that we know, and it stays a fact. That is why our theories should be falsifiable.

(Refer Slide Time: 25:25)

Two other processes that we use in research are induction and deduction. What are they? Induction is generalisation: from specific examples to a general statement, from observations to theories. If we have very specific results and from them we forecast a varied range of outcomes, generalising them, that is induction. Deduction is specification: from a general statement to specific examples, from a theory to hypotheses. So induction runs from the specific to the general, and deduction from the general to the specific.

For an example of induction: suppose I saw a parrot eat a chilli, and based on that I say that all birds eat chillies. That is an induction, because from one observation about one bird I am generalising to the whole bird category. Going from observations to theories, building a theory from particular observations, is the process of induction. In deduction we go from a general statement to specific examples. If I say that most birds are able to fly, and since flying is taken as a defining feature of birds I conclude that ostriches can fly, or that emus can fly, then I am performing a deduction, and in this case a mistaken one, since ostriches and emus are birds that cannot fly; a deduction is only as good as the general statement it starts from. When we go from theory to hypotheses, we are doing deduction; but when we take observations and build a theory from them, we are doing induction.

(Refer Slide Time: 28:01)

So scientists make observations and detect patterns, from which they propose a theory. Then they generate hypotheses, which are predictions about future observations. First they observe something; from that observation they propose a theory; and then they generate hypotheses based on the theory, predicting future observations. These new observations, called experiments, test the hypotheses and provide evidence for or against the theory. Based on certain data I propose a certain theory; then, based on the predictions of the theory, I run experiments and validate the theory. This is exactly how research is done. The process from patterns to theory involves induction, while the process from theory to hypothesis involves deduction. As you can see on the slide, there is the theory and there are the observations.
If I start with observations, look for patterns, build a theory, and move on to hypotheses and experiments, and then come back to observations, I am doing induction. If I start the other way around, going from theory to hypotheses, then to observations and patterns, and back to theory, I am doing deduction. These are the two basic routes for doing research.

(Refer Slide Time: 29:14)

Now we will take two experiments, and I will explain further how experiments are designed in language and in any behavioural science; we will look at the basics of experimentation in the behavioural sciences. For that purpose we will look at investigations of short-term memory capacity, two classic examples from the field of cognitive psychology and memory research, and take these as our model systems for explaining how research is done and experiments are planned.

The first experiment concerns STM capacity as a number of items (Miller, 1956). Miller gave us something called the magical number, 7 plus or minus 2: the number of chunks of items that can be stored in short-term memory. That is, this is the largest number of items you can hold in short-term memory, and note that the definition concerned short-term memory, not working memory. So what was being tested was how many items can be stored in short-term memory. The other experiment we will look at is Alan Baddeley's, which asks instead: what is the length of time for which items can be stored in short-term memory? Remember, this is a classic experiment, and on the basis of it the whole idea of working memory was developed.

Let us look at these model experiments. Miller's hypothesis was that people can hold 7 items in STM (strictly it should be 7 plus or minus 2, but let us go with 7). The test was a digit span task. In a digit span task, a number of digits appear to you one by one, say on a computer in front of you; you have to commit these digits to memory and later retrieve them. First a one-digit sequence, then two digits, then three, then four, and so on: as the number of digits increases, you have to remember them and report them back. What we want to see is how many digits you can actually hold in your memory. (A toy sketch of such a trial appears below.) It was found that people can reliably repeat 7 digits: 7 is the maximum number of digits that anybody can hold in short-term memory for a brief period of time. We will come to the time hypothesis shortly; here the question is the number of items. The interpretation was that the results support the hypothesis that STM capacity is limited. The question was whether STM can store an unlimited number of items, and if it cannot, what the actual capacity of short-term memory is.
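As a toy sketch of the digit span procedure just described (a simplified stand-in for Miller's actual task, with invented parameters such as the 50% accuracy criterion), consider:

```python
import random

def digit_span_trial(length, recall_func):
    """Present a random digit sequence of the given length and
    score whether it is recalled exactly, in order."""
    sequence = [random.randint(0, 9) for _ in range(length)]
    response = recall_func(sequence)  # the participant's attempted recall
    return response == sequence

def estimate_span(recall_func, max_length=12, trials_per_length=10):
    """Increase sequence length until recall becomes unreliable;
    the last reliably recalled length estimates the digit span."""
    span = 0
    for length in range(1, max_length + 1):
        correct = sum(digit_span_trial(length, recall_func)
                      for _ in range(trials_per_length))
        if correct / trials_per_length >= 0.5:  # reliability criterion (assumed)
            span = length
        else:
            break
    return span

# A fake participant whose memory holds at most 7 items, standing in
# for the responses a real human subject would give.
fake_participant = lambda seq: seq if len(seq) <= 7 else []

print(estimate_span(fake_participant))  # prints 7
```

In a real study the recall function would be a person typing back the digits, and the span would be averaged over many participants, which is where the mean of about 7 comes from.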
From this experimentation, the interpretation we arrived at was that 7 is the mean number of digits that the people who took part were comfortable retrieving back after doing the digit span task. That is what the result was.

Similarly, there was another experiment, on how long you can hold the digits in short-term memory: Baddeley's experiment. Here the hypothesis was that people can hold about 2 seconds of information in STM. Based on previous research, it was believed that 2 seconds is the duration of information storage in short-term memory. The test showed that people can repeat about 7 short words, but only 2 or 3 long words. So if we give very short words, people can remember and repeat about 7 of them; but if the words are longer, with 8 or 10 letters in them, then only 2 or 3 long words can be repeated. That was the test people were asked to do. And based on that, the interpretation is that the results falsify Miller's hypothesis and support Baddeley's. This is also an example of falsification. What Miller had found was that, irrespective of time, people were able to hold 7 plus or minus 2 items; here it was found that with long words only 2 or 3 items could be held. So time was of the essence, and not the capacity: short-term memory is not a capacity-driven store but a time-dependent store. The longer the words, the fewer of them can be rehearsed in time. So STM came to be defined in terms of how long information can be stored, not in terms of how many words can be retrieved.

(Refer Slide Time: 33:53)

Taking these experiments with us, we will progress further and explain the whole process of doing research. What are the methods of science, the basic techniques that we use? The first technique is something called naturalistic observation. What is naturalistic observation? It is the process of observing and describing a phenomenon. If there is a phenomenon, a particular set of observations out there, and we cannot go in and do any manipulation, what we do is stay away, look at what is happening, and collect data from that. Let us say we want to see how a certain tribe speaks: the way in which sentences are comprehended and produced, what the language of that particular tribe is. Since we cannot go and intervene with the tribe, and we do not know their language, the best we can do is stay somewhere at a distance and observe while they are exchanging their language. Because we go to that tribe and watch from a distance how they produce sentences, or whatever our phenomenon of interest is, this is called naturalistic observation. The goal here is to describe: the main goal of naturalistic observation is describing the particular phenomenon, describing how it happens. That is the best naturalistic observation can do.
It collects a lot of data, forms data sets from it, and describes. So it is exploratory in nature: nothing is established yet, and we are still exploring.

Then we have something called correlational methods. A correlational method is a mathematical technique that seeks patterns in data. Let us say we have no variables to start with; we do not know what is happening, and we want to study how people in a particular tribe communicate with each other. For English, or any other well-studied language, we have very settled rules and paradigms of language, and well-developed experiments to study how language is produced and comprehended, how meaning is extracted from it and information passed on. But for a tribe somewhere in Africa, or some other far-off place that has not come into close contact with the wider world, we have nothing to start with, because the rules we have for English, or for any other language, may not apply there. So we first go in and watch while they are communicating, recording the phenomenon as closely as we can and collecting as much data as possible; this is field research. Then, from the many data points we have collected, we run a correlation.

Correlation is basically a technique in which we find commonalities between various variables of interest. Now, what is a variable? A variable is something which varies. Among the things of interest, the points of interest, we find commonalities, and that is what correlation is. Since we have so many data points, we generally use something called factor analysis here. When we run the correlations, we find bunches of data points which accumulate together and form categories; this is how factor analysis works. The points that correlate highly are clubbed together to form the factors of that particular dataset, clear-cut groups within the data. And these groups are what are called the factors; they are the variables of interest for us. So it is a mathematical technique that seeks patterns in data. (A minimal sketch of such a correlation appears below.) The goal here is prediction: once we are able to find patterns, these patterns will predict, for instance, what kind of language this community is using, or what rules their language follows. That is what correlation is all about.
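As a minimal sketch of seeking patterns through correlation (the feature names are invented examples of what might be coded from field recordings, and the data are simulated):

```python
import numpy as np

# Hypothetical coded observations: each row is one recorded utterance,
# each column a feature we measured (invented names for illustration).
features = ["utterance_length", "pitch_range", "gesture_count", "pause_count"]
rng = np.random.default_rng(1)
data = rng.normal(size=(200, len(features)))
# Build one pattern into the simulated data: pauses track utterance length.
data[:, 3] = 0.8 * data[:, 0] + rng.normal(0.0, 0.3, size=200)

# Pairwise Pearson correlations between all features (columns).
r = np.corrcoef(data, rowvar=False)

# Report strongly correlated pairs; in factor analysis, clusters of such
# highly correlated variables are what club together as factors.
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        if abs(r[i, j]) > 0.5:
            print(f"{features[i]} ~ {features[j]}: r = {r[i, j]:.2f}")
```

Running this flags the utterance_length and pause_count pair, the one pattern planted in the simulated data; with real field data, whatever pairs emerge would become the variables of interest for the next stage.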
Once a prediction has been made, once we have these predictions, these variables of interest, these patterns, we take the patterns and manipulate something within them; in experimentation, we say we manipulate the IV. We change those patterns by certain degrees and see how a particular behaviour of interest changes. That is what the experimental method is all about. From observations we build a theory; from this theory we produce hypotheses; and then we test the hypotheses. This is the experimental method: a means of systematically testing hypotheses in controlled situations. What we do is take those patterns, bring, say, members of the tribe into the lab, and manipulate those variables, those data patterns. When we do that, a particular kind of response gets generated, whether we are looking at accuracy, at reaction time, or at some other variable of interest. We test whatever hypothesis we generated out of the correlation, and the goal of this step is explanation. So: theory building, naturalistic observation, correlation producing hypotheses, and the experimental method testing the hypotheses.

(Refer Slide Time: 39:59)

Based on that, we produce models and theories. What is a model? Based on the experimentation, we generate a data model. Models are simplified versions of the phenomenon under study. Most theories are expressed as models, which are attempts to explain the underlying mechanism, typically in the form of a graph, a set of mathematical equations, or a computer simulation. So a model could be expressed as a graph; a simple graph is a model. Or it could be a set of equations: for example, y = mx + c is a linear model, and more generally y = r1 x1 + r2 x2 + r3 x3 + ... + e, where e is an error term; such a model explains how manipulating x1, x2, x3 generates y. (A sketch of fitting the simple linear form appears below.) Similarly, we can have non-linear models. Or sometimes we have a computer program: for example, ACT-R (and its predecessor ACT*) and SOAR are all computer models of cognitive processes, of how cognition really works. So a model is a very simple, basic version of the particular phenomenon we are studying, of the question we are investigating.

Now, what are computer models? They mimic the behaviour under study. A computer model takes in external inputs of some kind and generates an output; it takes a number of inputs and builds the behaviour of the whole theory out of them. These are called computer simulations.
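As a minimal sketch of the linear model mentioned above, y = mx + c can be fitted to data by least squares (the numbers here are invented for illustration):

```python
import numpy as np

# Invented data points: imagine x is some manipulated quantity and
# y the measured response.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8, 12.1])

# Least-squares fit of the linear model y = m*x + c.
m, c = np.polyfit(x, y, deg=1)
residuals = y - (m * x + c)  # what is left over: the error term e

print(f"fitted model: y = {m:.2f}*x + {c:.2f}")
print(f"largest |error|: {np.max(np.abs(residuals)):.2f}")

# The fitted equation also predicts y for new values of x, and it is
# these predictions that can then be tested against fresh data.
print(f"prediction at x = 7: {m * 7 + c:.2f}")
```

The same logic scales up: a multi-variable model y = r1 x1 + r2 x2 + ... + e is fitted in the same least-squares spirit, just with more coefficients.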
Computer models help us overcome unwarranted assumptions and flaws in logic. Why do we make computer models? Because computer models are very effective and very economical. Instead of testing a model, a particular prediction of a theory, on the actual subjects straight away (because if we do that and it fails, the study may go nowhere), we simulate the situation in a computer: the same kind of phenomenon, the same kind of results that we are expecting. We gather results from the simulation, and from those results we try to assess what problem we are facing, what kind of logic we are using, and where the model is failing. Until it produces the kind of results we want, we never actually go ahead and test the model on the actual data. So computer models protect us from unwarranted assumptions: when we are making false assumptions, the simulation will show it, and we can verify and revise the model. For example, look at ACT-R: the way procedural memory and declarative memory are described in it, in terms of spatial and propositional codes, is a good illustration of how a computer model really works and how such models can solve problems in the real world. Flaws in logic, too, can be pointed out by computer programs.

Now, models and theory. A good model lends plausibility to a theory, but only data can support or falsify a theory. Once we have a theory, we go and collect data based on what the theory is saying, and then we either falsify the theory or rebuild it. We try to overcome the assumptions and limitations of a theory and build up a good theory for the data.

(Refer Slide Time: 44:02)

Constructs are another interesting element of research. What is a construct? A construct provides a scientist with a useful way of thinking about the world: it gives an outline, a format in which scientists are able to conceptualise the external world and the phenomena in it. A construct is the label given to a set of observations that seem to be related. Memory, attention, intelligence, personality, even language, are constructs: phenomena that are out there, which we come to understand as a certain process.

In line with constructs is something called operational definitions. What is an operational definition? This is very important, at least in the social sciences. Why? Let us say I study a phenomenon, say happiness. When I am studying happiness, it may be that certain other researchers are also studying happiness. So when I give my results, I have to say the way in which I operationalised my definition of happiness.
Let us say that when I say happiness, my happiness is bounded by characteristics A, B, C, D: a certain kind of elevated feeling, high arousal, a certain facial affect, and so on and so forth. Certain other researchers may say that A, B, C, D is not the primary way in which happiness is defined, and that E, F, G is. So when I operationalise the definition, I say that my way of defining happiness is based on A, B, C, D and not on E, F, G. And that is why my theory of happiness, the results from my experiments on happiness, may not fit some other experiment. This is what operationalising a definition is: it defines the construct in terms of how it is measured. I say my happiness is measured in terms of A, B, C, D; some other people may say it is measured by E, F, G; and that is a difference that can happen.

For example, look at intelligence. Defining intelligence as IQ, the score on a particular test, is one way of operationalising the definition of intelligence; some people would believe that intelligence is not based on test scores but on something else, and so on. Likewise, short-term memory capacity can be measured in terms of digit span, or it can also be measured in terms of time; these two ways of measuring short-term memory are two operationalisations of the definition.

(Refer Slide Time: 47:11)

Another important pair of concepts that we need for doing research in language, or in the other social sciences, is validity and reliability. What is validity? The degree to which an instrument measures what it claims to measure. Let us say I develop a test of intelligence, and when I give it to people, the test does not measure intelligence; it could be that my questions are not related to intelligence at all. Say I develop a test of academic intelligence which does not have questions based on, for instance, reasoning or verbal reasoning. In that case my test is said to be not valid: if my test of academic intelligence does not have questions related to academics, or to intelligence in academics, it is an invalid scale.

How do we measure validity? When I have a new test of intelligence, I take it and run it against a test which is already there in the market, which claims to measure the same thing, intelligence or academic intelligence. If the scores on the two tests, the pre-existing, well-established one and the one I have developed, are in agreement, that is, if they are significantly and highly correlated, I say my test has validity, because it is measuring the same thing that the established test measures.

Now, for example, let us look at the bathroom scale.
A bathroom scale is valid for measuring weight. If I stand on a bathroom scale and it does not give me my weight, then what is the point of that scale? When I stand on it and it says 80, that 80 is my body weight, which is what the scale claims to measure; it is not measuring intelligence. If we said that what it is giving is my IQ, that would be a wrong assumption, because weight has no correlation with IQ, and a bathroom scale used to measure IQ is not valid.

The other interesting concept is reliability. (What we are covering today should serve as a guide to designing experiments in language or in any behavioural science: deriving hypotheses, using a theory, how experimentation is done. Towards the end of this section I will tell you what an experimental design is. And whenever you develop a test or an experiment, you should always be aware of the validity and reliability of the scale or experiment you are using.) Validity, again, is whether my test measures what it claims to measure. Reliability is the degree to which an instrument gives consistent measurements of the same thing. Let us say I have a bathroom scale, and I stand on it and it gives my weight as 80 on the first reading. The second reading says 85; then 90; then 70; then 80 again, then 85. My scale is not reliable; it has not been calibrated properly. (Calibration is the process of tuning the sensitivity of a particular instrument.) It is a non-sensitive scale, and nobody would like a scale like that. An instrument should give consistent measurements within a small range of error, say plus or minus 2 kg: if my weight always reads in the range 78 to 82 on the same machine, I will say it is a reliable scale. But if it keeps throwing out 70, then 60, then 90, whatever it wants, every day, then it is not reliable. In the same way, if I repeat an experiment and it keeps giving different results on different replications, then it does not have reliability. Reliability is acquiring more or less the same results over multiple repetitions. If the daily measurements of a bathroom scale are 143, 285, 37, 196, it is not a reliable scale; but if they are 157, 155, 156, and 158, it is a reliable scale. This comes down to sensitivity: sensitivities of scales vary, and an error of around plus or minus 5% may be acceptable, but if it is more than that, the scale is non-reliable.
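As a minimal sketch of this consistency check (using the lecture's example readings and its rough 5% tolerance, not a formal psychometric reliability coefficient):

```python
import statistics

def is_reliable(readings, tolerance=0.05):
    """Treat an instrument as reliable if every repeated reading of the
    same quantity falls within +/- tolerance of the mean reading."""
    mean = statistics.mean(readings)
    return all(abs(r - mean) <= tolerance * mean for r in readings)

print(is_reliable([143, 285, 37, 196]))   # False: wildly inconsistent scale
print(is_reliable([157, 155, 156, 158]))  # True: consistent scale
```

In actual test construction, reliability is usually quantified instead as a correlation, for example between scores on two administrations of the same test (test-retest reliability), but the underlying idea is the same consistency of repeated measurement.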
(Refer Slide Time: 52:18)

Now, before doing an experiment, we have to come up with something called an experimental design, and before that, we have to know what experiments are. Experiments are tightly controlled situations designed to test a hypothesis. An experiment is a way of controlling a number of variables, a number of factors, that are extraneous. What are extraneous factors? Extraneous factors are those factors which may interfere with the result if we do not control them. Let us say we are measuring a person's reading speed. The reading speed of a person may be affected by intelligence, by the number of books he has read, by digit span, by so many things. If we want a true test of reading speed, of how quickly somebody reads, how quickly the eye takes the sentence in and the brain makes meaning out of it, we need to neutralise these factors of intelligence, digit span, previous experience, and so on; so we control for those variables. In an experiment, then, we control for all variables, of interest or not, which may interfere with the results, and we manipulate only one variable.

In a reading span test, for example, what we manipulate is presentation time: we present the same sentence for different presentation times and see how quickly you can read it. If the question of interest is how quickly you can read a particular sentence and then generate meaning out of it, we vary the presentation time while controlling for intelligence, age, and that kind of thing, because the more widely read you are, the better your chances of rightly predicting a particular sentence, so we control for that. Then we present the sentences at different speeds to people of balanced intelligence and test the hypothesis from their performance.

So an experiment is a tightly controlled situation designed to test a particular hypothesis, and it is a comparison between two groups that are treated differently. First I control all variables of no interest, which in research we call extraneous variables. Then we have two groups: the experimental group will see variations in the speed of presentation of the sentences, and the control group will have one standard speed. Later on we see how accurately you are able to reproduce the sentence, or comprehend its gist, whatever the question of interest is. The experimental group is the group in which you are making the manipulation, and the control group is the group in which you are not making the manipulation. So we are comparing two groups that are treated differently: in one group we make the manipulation, in the other we do not. That is how it is done.

Again, we will look at the Baddeley and Bransford studies and see how these things work. In an experimental design, then, hypotheses are derived from a theory by the logical process of deduction.
An experiment is then designed to test this hypothesis; that is the process of generating an experiment. Now, experiments compare the performance of different groups. The experimental group is given a particular treatment, to test the hypothesis. The control group goes without the treatment, in order to provide a baseline for the comparison. An experiment can be viewed as a stimulus-response test, and we will look at the IVs and DVs for this. So this is how an experiment is set up, and what we will do is take two basic classic studies and talk about how these experiments were done: what the variables were, what the extraneous variables and variables of interest were, and what the factors in doing the experiments were.

One is the Baddeley (1975) study. The hypothesis here was that STM capacity is limited by the length, not the number, of words: how many words there are is not of interest, but how long the words are. The method: group A repeats short words, group B repeats long words. Basically, if the question is whether word length, not word number, determines STM capacity, I create two groups. Better still would be to create three groups: in one group short words, in another long words, and in the third words of intermediate length. So I could have three groups, or only two; since we are focusing on Baddeley's study, there were two: one group saw short words and the other long words. That is the method.

In the Bransford and Johnson (1972) study, the hypothesis was that context aids comprehension of an ambiguous story. Suppose I am seemingly randomly talking about something, and I tell you a story: "you take some powder, put it into the machine, put things onto it, and it goes around and around, takes in water," and so on. Until I show you the photo of a washing machine, you will not be able to comprehend that I am talking about a washing machine. That is what we are trying to see: whether context, a picture of something, can aid comprehension of an ambiguous story; whether the context against which the story is told helps us in making meaning out of ambiguous sentences. The method here: group A sees the picture and hears the story; group B hears the story with no picture. The group that sees the picture immediately recognises the washing machine, and when they then hear the story, they are able to relate the description to the washing process. Group B hears the same random-sounding story without seeing the washing machine, and has no idea that we are talking about a washing machine.

(Refer Slide Time: 59:30)

Now, for any experiment to proceed, most experiments have two types of groups. One is called the experimental group.
And the other is called the control group. Let us say I want to do an experiment, any experiment. The first thing is that I have to define the variables of interest. Let us say the variable of interest is whether the speed of sentence presentation influences comprehension. The speed of sentence presentation is the IV, because that is what we will manipulate. And sentence comprehension, which is how well you can read a sentence, how well you can retrieve it and get its gist, is called my DV. So the IV is the speed of sentence presentation, and the DV is sentence comprehension. Then we create two groups. The experimental group is the group in which I vary the speed of presentation: I take a sentence and present it for, let us say, 100 milliseconds, 500 milliseconds, 1000 milliseconds, 2000 milliseconds, and so on and so forth. In the control group I do not manipulate my IV: I have a fixed presentation time, which means, say, that 1000 milliseconds is the time for which the sentence is presented. And then I measure sentence comprehension in terms of how accurately you can reproduce the sentence back; that is the DV. So, the experimental condition is the group that is given the treatment, to test the hypothesis; that is called the experimental group. And the control group, or control condition, is the group that is not given the treatment; it provides the baseline for comparison, because it tells me whether these speeds have any effect or not.

(Refer Slide Time: 01:01:46)

Looking again at the Baddeley and colleagues (1975) study: the experimental group repeated long words, testing the hypothesis, and the control group repeated short words, replicating the digit span task, the task from which it had been found that about 2 seconds is the window within which people could hold the items. In the Bransford and Johnson (1972) study, the experimental group sees the picture and hears the story, testing the hypothesis, and the control group hears the story with no picture. As I said, the control group does not see the picture but hears the story, while the experimental group hears the story as well as seeing the picture. That is what the difference between my control group and my experimental group is all about.

(Refer Slide Time: 01:02:25)

Now, since we have taken up the idea of what an IV and a DV are, let us define them. First of all, what is a variable? Any function of the form y = f(x) relates two variables: this x and y are variables. What it says is that y is a function of x; when I put in values of x, when I manipulate values of x, I get values of y. A variable is anything that keeps on changing. The independent variable is the variable that we manipulate, that we change: in the speed-of-presentation test, the speed at which we present the sentences is the IV. And the DV is the gist, the comprehension, the retrieval of the sentence that you produce. So my independent variable is the various types of treatment given to the different groups in the experiment.
(Refer Slide Time: 01:02:25) Now, since we have taken up the idea of what an IV and a DV are, let us first of all ask: what is a variable? Any function of the form f(x) = y involves variables; here, x and y are the variables. What it is saying is that y is a function of x: when I put in values of x, when I manipulate the values of x, I get values of y. That is what a variable is: anything that keeps on changing. And so, the independent variable is the variable that we manipulate, that we change. In the speed of presentation test, the speed at which we present the sentences is what is called the IV. And the DV is the gist, the comprehension, the retrieval of the sentence that you produce. So, my independent variable is the various types of treatment given to the different groups in the experiment, and my dependent variable is the measurement of the response each participant makes to that particular treatment.

(Refer Slide Time: 01:03:25) In my Baddeley experiment, the IV was short words versus long words; word length is the manipulation. The DV was the number of words correctly recalled, and this is called accuracy. In my Bransford and Johnson study, it was picture versus no picture; basically, a picture stimulus. Since both groups heard the story, the only thing we are manipulating is picture versus no picture. So here, picture versus no picture is my IV, and the DV is the subjective rating of difficulty and the number of items recalled. Again, it is accuracy: how many words can you correctly recall.

(Refer Slide Time: 01:04:00) And lastly, let us look at something called between-subject and within-subject designs; hypothesis testing we will take up in the next class. Basically, there are two types of designs that we use. Sometimes it is very difficult to control for subject factors. There are certain kinds of experiments which require the same subject to be repeated, because certain subject factors may be responsible for the different results that we are getting. In those cases, we use something called a within-subject design. So, if the same subject is repeated in both the control condition and the experimental condition, this is called a within-subject design. But if different subjects go into the experimental group and the control group, this is called a between-subject design. So, what is a between-subject design? It assigns each participant to only one condition, whereas the within-subject design assigns each participant to every condition. Right. It may be possible that there is one factor we are interested in, and that factor may differ from person to person. So, to control for it, we repeat the same subject across the different conditions, the different testing, and this is called the within-subject design. So, let us again look at Baddeley's experiment.

(Refer Slide Time: 01:05:20) In Baddeley's experiment, each participant repeats both short words and long words, on separate trials. This is a within-subject design. In Bransford and Johnson, some participants see the picture and some do not, while all hear the same story; each participant gets only one condition, so they used a between-subject design. Now, hypothesis testing is something that we will do in the next class, and look at further.
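Here is a minimal sketch, in Python, of how participants might be assigned under the two designs just described. The participant IDs and condition names are placeholders, and the per-person shuffling in the within-subject case is just one common way of counterbalancing order.

```python
import random

# Placeholder participants and conditions for the assignment sketch.
participants = [f"P{i:02d}" for i in range(1, 9)]
conditions = ["short_words", "long_words"]

def between_subject(participants, conditions):
    """Assign each participant to exactly one of two conditions."""
    shuffled = random.sample(participants, len(participants))
    half = len(shuffled) // 2
    return {conditions[0]: shuffled[:half],
            conditions[1]: shuffled[half:]}

def within_subject(participants, conditions):
    """Give every participant every condition, in a random order
    (a simple guard against practice and fatigue effects)."""
    return {p: random.sample(conditions, len(conditions))
            for p in participants}

print("between-subject:", between_subject(participants, conditions))
print("within-subject: ", within_subject(participants, conditions))
```

The trade-off is the one the lecture hints at: between-subject assignment keeps each person naive to the other condition, while within-subject assignment lets each person serve as their own control, so subject factors cancel out.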
So, let us quickly and briefly go over what we did in today's class. Other than going back and reviewing what we did in the earlier classes, in the present class we looked at how experiments are designed and done: the various factors, the various constituents of a scientific research experiment; the conditions under which an experiment is done; the mechanism of doing an experiment; the process of induction; what theories are; what a hypothesis is; what an experimental design is; and what types of experimental designs there are. And, we took some model systems, some model experiments, and looked at how these experiments fit into the conceptualisation of experimentation that we have been talking about. From the next class onwards, we will look at how hypothesis testing is actually done. We will take some examples from language and see how to construct an experiment, look at some experiments which have been done in language, and discuss those experiments in detail. So, in all, within these two lectures I will tell you how research is done, specifically in language, and generally in any of the behavioural sciences. So, until we meet again in the next class, it is goodbye from here. Thank you.
