Full Transcript
My name is Diane Sieber. I'm an Associate Professor in the Herbst Program for Engineering, Ethics & Society, which is located in the College of Engineering and Applied Science at the [inaudible] campus. I have in the past been the Associate Dean of the College for Education. I co-founded ATLAS, the Alliance for Technology, Learning and Society, and ran it for many years. I fundamentally got into technology from a very traditional, literary, historical background when I was in the Spanish Department, and that was quite a while ago. I realized first that programming is language, and I love languages; in fact, that's what I did. I also realized that I probably should have been a computer scientist all along. I transferred my tenure to engineering and have been wonderfully happy here ever since. Part of what our program does, as we educate the creators of future technologies, is look at the ethics of technologies, and particularly at strategies for thinking through the ethics of technologies. Really in the trenches: students who are hired as engineers have to start programming immediately, and they have decisions to make that their superiors have no idea are ethical decisions. I think that's where all of my interests intersect. I'm also doing some research with computer science colleagues on AI art, particularly K-12 art education.

Yeah, I teach. The reason I started getting involved with generative AI was a computer science colleague who gave me a preview of ChatGPT before its launch in November 2022, and my absolute catatonia over it for a weekend as I thought through what this was going to do. It's pretty rare to be able to say I've seen something that's going to change everything, and I really had. The thing I immediately thought, because I'm pretty pro-technology, was not "this is going to ruin everything." It was: this might finally be the answer to a problem I've had for at least 10 years. I've been teaching, periodically, a first-year writing class in the Herbst Program. Over the past 10 years, I have noticed a deterioration in writing, to the extent that I was getting very bad first drafts as final paper submissions that took so much time to unravel that I despaired of teaching writing the way I had always taught it: helping each student figure out their unique voice and their individual style, and helping them develop their strengths instead of writing a generic anything. The first thing I noticed was that it immediately leveled the playing field for students who hadn't been well educated in writing. In particular, it allowed students who couldn't even get a word on a page, or who struggled for five days to try to write a paragraph, to get something terrible on paper and start working with it. I think of this like surfing: you have to break through the breaking waves to get out there and catch something interesting and meaningful, and they couldn't break through the breakers in order to write something they could then work with. I immediately saw this not as something that would cause students to circumvent learning, but as something that would in fact cause them to learn better.

Can you share some stories of just how you've used it through your design process?

I will confess to being somewhat of a traditionalist [inaudible] structuring a syllabus. My standard syllabus is about 16 pages. It tells a story, and often there's also a one-page cartoon map, so that students can see the full landscape of where they're going.
Generative AI is not as useful to me there. In terms of writing a syllabus and writing assignments, I find it quite useful for taking my very serious description of something and changing the tone slightly, to be lighter and more encouraging. I certainly use it to help craft emails to students, when I'm sending something out to everyone, to make it more friendly and welcoming and less business-to-the-point, which is how I tend to communicate. That tone shift is actually where I find it most useful. But what I'm also finding is that I've had to rethink everything about how I structure a writing course in particular, in order to help students gradually level up to where they see ChatGPT, for example, as a co-writer, a co-author, as opposed to "write this instead of me." I started by requiring it for my classes. Since halfway through fall of '22, it has been required for my students, and obviously only the free version is required. What I started to do was simply find out what they do when they just sit down with it, and that has helped me figure out how to design a course, because "this is an open-world cyberpunk game, I'm going to poke at things and do whatever I can to see what works" turns out to be a wonderful way to learn in general. I'm a gamer, so for me the best way to learn was through games, and to some degree, watching them poke around has helped me figure out how to incrementally introduce more complex prompts and more complex ways of interacting with generative AI. It has also helped me show that there are so many different levels of interaction: from "I just don't want to touch it because I'm me," to Centaur interaction, where I divide up my responsibilities and some things belong to the AI and some are me, to cyborg use, where everything is fully integrated, your workflow is integrated, and you interact with AI on everything. I think that has changed the framing of a course, because if our fundamental goal is to teach students not content but how to learn, and they're going to be learning from AIs, with AIs, through AIs, and using AIs, then I think we really have to get serious about how we integrate that into our curricula and our design.

Completely agree. I love how you framed those different levels of use and integration: the cyborg, the Centaur, the "I'm just me." I nearly said Luddite, but that's not what it is; it's "I'm a unique voice and I'm not using anything." I think that's obviously quite a valid approach. My concern is that we've already got a number of research studies showing that you have to be able to use LLMs, generative AIs, in the workplace. It is an increasingly required skill to have. Thinking back to when you first introduced it in late 2022, what were your students' thoughts, their reception to it? Was it positive? Was it negative? Was there a lot of skepticism?

I think actually it was more fear. Even when you explain how the technology works, their sense was that this was just a giant predictive engine, like autocomplete on your phone, except way more sophisticated and trained on millions and millions of words. I think it was a bit demoralizing, because the initial reaction was: well, I've spent all of these years trying to learn how to write a five-paragraph essay in high school, I've been struggling in college, and wait, now none of that knowledge is useful. That lasted until they started playing with it. This was the first release of ChatGPT, so it wasn't very good. It's a lot more sophisticated now.
But just by playing with it, they began to see, in particular, the discrepancy between having a voice and not having a voice. You can be formally correct and still not say anything. You can be formally and grammatically perfect and yet be boring. You can string together three paragraphs that seem on the surface like they make perfect sense, and yet there's no through-flow of argument and there's no point of view. To some degree, it helped tremendously right away in trying to explain to students what's missing in not-good writing. They were so focused on grammar and on sentence variation and all of these things they'd been told they had to master that they really hadn't had time to think about argument or voice. They had thought a lot more about persuasion, and I think they did find it useful to see how persuasive generative AI can be. But I also think it was shocking, initially, to see how prolifically it lies, how confidently it gives you exactly the wrong answer. Over the course of two semesters, three semesters now, that has changed. It's far more accurate, it's far less likely to hallucinate, and therefore it's even more convincing. I think it's a wonderful opportunity for them to start thinking about what an argument looks like. Students now are saying: actually, this helps me understand what's missing in a political speech. What's not being said, as opposed to what is being said. What's the logical, rational argument, as opposed to what's the emotion? Of course, you can take a political speech and feed the transcript into a Gen AI, which we've done, and have it spit out the same argument. You can say: adjust the emotion, adjust the [inaudible], make it more angry, make it less angry. You can start to see how rhetoric works. It would be way too time-consuming to generate all of these different modes of expression by hand in a one-semester class. What they've begun to see is that play is great. I think they're also feeling threatened by a fair number of faculty who have simply adopted no-use-of-AI-whatsoever policies in their classes. I have a student who, on the first day of class this week, was told: everything will be run through Turnitin.com, and if it's flagged as possibly more than 50% AI-generated, you'll fail. This is a faculty member who should know that none of these detectors work. They're wildly inaccurate, and because generative AI output is emergent, it's always original. Even if it's gleaned from many other things, a detector doesn't work. I think overall, to me, the biggest shock and benefit of this is that I have to rethink assessment. Why am I even teaching writing? Because writing is a way to think; it's a way to clarify your own thoughts for yourself. Therefore, obviously, one has to assess the process, because assessing the product simply incentivizes everyone to use an AI.

Yeah. I think that shift has really started to challenge a lot of course designers and instructors: the end product is often not as important as the process and the learning that goes on in between.

But the thing that's amazing about this is that it never was. It just clarifies it. This is one of those moments of crystal clarity. There's a Far Side cartoon from many years ago, of course, that really represents this to me. It's a bunch of cows in a field, and they're looking straight at the frame, and they're saying: grass, we've been eating grass. They're so shocked they've been eating grass all this time.
To me, this has provided a lot of those moments about how I design a course, what I'm assessing for, what I actually want my students to leave a course with. While I don't use generative AI to write my course outcomes, generative AI has caused me to completely rewrite my outcomes.

You mentioned you've had to rethink some of your assignments. I was wondering if you could touch a little on how you've thought about restructuring those and rethinking those outcomes and things like that.

Sure. I think I had already decided, before Gen AI appeared on the scene, that I was going to have to figure out how to teach writing differently. I literally went back to "write one good sentence," and was starting writing classes not with "every week you're going to write me an essay" but with "write me one good sentence for every class, and let's talk about why it's a good sentence." By working our way from one good sentence to one good paragraph, and only then allowing people to start connecting paragraphs. It has mattered a lot to be able to look at what a GAI will do. On your own, try to write one good sentence. Take that, feed it into ChatGPT, and ask it to improve it. What does it think one good sentence looks like? Starting at that level, rather than talking grammar, which obviously ChatGPT will fix for you, or spelling, or any of those things, but instead being able to start at the basics and then having them apply this to ChatGPT. For example, once you know what a good sentence is and you can identify it in every one of our readings and in other students' work, then you can start looking at a sentence that looks completely feasible in ChatGPT and say: wait, that's not actually a good sentence. It is formally correct and not compelling. So I think that has helped a lot. It's also helped to confirm to students why it is that you work with writing in multiple stages. We're using ChatGPT, that's why I keep mentioning it, but really I'm talking about any LLM, any generative AI program. I have them start with the ideating process. They have to come up with ideas for what they want to write about, but then they can work with an LLM to generate ideas, of course. They can prompt an LLM to ask them questions: "every time I give you something, ask me a question about it," or "every time I ask you a question, tell me a better way to ask that question." There are all sorts of ways to start off developing a relationship with a writing assistant, in a sense. The second stage is prompting. They had no idea how to prompt. Fundamentally, everyone started using it as if it were Google, which is really funny. Just figuring out that you can tell it "act as if you were a computer science professor and respond to my comments on the dangers of generative AI." Or the persona command: "talk to me as if I were a five-year-old," "talk to me as if I were a high schooler," "talk to me as if I were a PhD student about this topic." Just being able to figure out how to prompt for better output. Obviously, terrible input equals terrible output. Then there's the analyzing stage: whatever it generates, you then get a chance to actually analyze it for meaning, for flow, for argument, for accuracy. These are things they haven't been doing with their own papers. They've been writing in a linear fashion and turning it in. Now that they have an AI which is external, they have a chance to look at it like an editor, and if they can learn to edit an AI, they can learn to edit themselves.
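To make that concrete, here is a minimal sketch of what one of those persona prompts looks like sent through the OpenAI chat API in Python. The model name, the SDK setup, and the sample user message are illustrative assumptions; in class, students simply type the same prompts into the free ChatGPT interface.

    # Minimal persona-prompt sketch. Assumes: pip install openai,
    # and an OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # The "act as if" persona command goes in the system message.
    persona = ("Act as if you were a computer science professor and respond "
               "to my comments on the dangers of generative AI.")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not the one used in class
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": "AI detectors are enough to keep students honest."},
        ],
    )
    print(response.choices[0].message.content)

Swapping the persona line for "talk to me as if I were a five-year-old" or "a PhD student" is the same one-line change, which is what makes the persona command such a cheap lever on output quality.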
The next stage for me is co-refining, where you interact with an AI for feedback, for formal and stylistic improvement, and also for shame-free first-draft parsing. One of the hardest things is to show somebody else your writing, and I don't care what level you are, it's still hard for me. But a student can take a piece of work that they've worked on, turn it in to ChatGPT, and say: "consider the following qualities of this writing and give me feedback on it; what needs to be improved?" Then they can go down the list and say: wait, okay, I didn't think of that; of course I need to do that; that's stupid. They get to have that relationship before they ever even turn something in to a faculty member. I do paper conferences with students on completed papers, but that's completed papers. This gives them a chance to have a conference that is no-fault, no-grade, no-shame, before they ever come into the presence of an authority figure. Then finally, citing: figuring out how to cite all of this stuff. I invented a citation method based on the MLA in fall of '22. We're still using it, because nobody has really generated a better substitute at this point. But you have to be able to properly acknowledge the contributions of any co-author, whether it's a research essay written with a bunch of other scientists or something that you wrote with an AI. Citing it as a collaborative partner, I think, is a legitimate exercise, and so is attaching an appendix to explain: here are the prompts, this is what happened, these are the parts that I know I wrote, these are the parts that it wrote. I'm concerned about students coming out of this worrying that they didn't actually do the work, that they can't own it, that they can't feel proud of it. Having them document what they did and what it did is, I think, pretty important for the psyche. What you need to live a good life is to feel you matter, you have impact, and you've generated something significant. Sometimes it can take a lot of work to get ChatGPT to do what you want it to do and get it to a good state. It takes a lot of knowledge just to steer it that way, knowing what you want and how to ask for it. Also knowing that you are legitimately capable of not accepting what it has generated, or changing it completely, or saying try again. Hit the try-again button. Because the other thing I saw early was students who were paralyzed because they would generate a three-page essay, and then they'd do another one, and another one, and another one, and they'd have them all on their screen together. They'd say: well, I can't even tell which one of these is better. I don't know which one is worse. I can't figure out even how to judge these. And that's where we're headed in the workplace. That's a completely different skill than we've really been teaching them.

As your students really engage with ChatGPT, it sounds like it's fully integrated in pretty much every stage of their writing. I was wondering if you have any policies, or how you approach that subject at the beginning of a semester: what constitutes cheating for you, and what you're looking for in terms of citing and documenting?

I do. I have a handy seven-page guide for my students to get started. Fundamentally, it says: you're required in this class to use ChatGPT, free version. You're welcome to pay for a different version, but the free version is quite sufficient. Just be aware of the fact that at peak times you may not have access, so don't leave it to the last minute.
Then: ChatGPT is a valuable, expressive tool alongside many other tools, and it's important for you to learn how to use it wisely and effectively. I basically tell the story of Garry Kasparov and Deep Blue. I'm all about telling stories. In 1997, when the chess grandmaster Kasparov was defeated by Deep Blue, he went into a six-month depression, thought chess was over and he was no good, and came out of it by inventing a new form of chess. That's why I say Centaur. Centaur chess is now a thing. There's a league where the best of human emotion, the ability to psych out your opponent, the ability to read a room, that shock creativity, is combined with a machine that can do things humans can't do. It produces a form of chess that nobody's ever seen before, and that's very exciting. To me, that's the metaphor for what we're trying to do. Yes, you can write on your own, and yes, ChatGPT could write for you. But the combination of those two things generates something that is so much better than either can generate on its own. For that reason, consider ChatGPT a co-author and cite it as a co-author. I've had students come to me and say: I'm really concerned, because I turned in something and I cited ChatGPT and my professor was really upset. I've had conversations with other people's professors about this, because we can't draw a line. There isn't a line now, and any line we try to draw is artificial. Any student who goes out into the actual world is going to find this. The Harvard Business Review study that came out in September, and I believe I have these numbers correct, showed that people who were using ChatGPT in the workplace for their work were some 15% speedier in their production, and almost 40% more accurate and higher in quality. There's no way that the workload is going to accommodate people sitting and writing the way we've traditionally written. Anyone who wants to thrive in the workplace is going to have to know how to do this. It's been equated on some level to the problem math classes had when calculators happened. But this is actually quite different, because if writing is thinking, if writing is the process of translating your inner self for other people, then you're using an external medium to help you with that translation. You're not solving math problems, you're communicating self, and that has to do with identity.

On that note about developing that voice and that style and identity: do you find, or have any concerns, that Gen AI could emphasize one form of writing over another and reduce voice and style in any capacity? Or do you really see it as something that allows more flexibility and more experimentation there?

Well, obviously GAIs are not all the same, and I have no doubt that there are GAIs that will do exactly that. Already there have been some studies showing that different GAIs have different political leanings. We know that to be true. But I think that's why prompting is so important: being able to express what you want. It's actually so much harder. This is a programming problem. It's trying to figure out: how do you break down what you want to have happen into the component parts that you need for that to happen? To some degree, it's a process of learning how to clarify what you want. Again, it's very possible to be lazy about that, except when you're working with a generative AI, in which case you get an immediate result that's just wrong. Just as in programming, you know immediately if you've communicated incorrectly or insufficiently.
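As a minimal sketch of that decomposition, with wording that is illustrative rather than taken from the actual course materials, compare a lazy prompt with one broken into component parts:

    Lazy prompt:
        "Make this paragraph better."

    Decomposed prompt:
        "Act as a writing tutor for a first-year engineering student.
        Here is my paragraph: [paste draft].
        1. State what you think my main claim is.
        2. Flag any sentence that does not support that claim.
        3. Suggest one stronger topic sentence.
        Do not rewrite the paragraph for me."

The second version specifies a persona, the input, the steps, and the boundary of the task, so a wrong result points to a specific miscommunication, just as a bug does in a program.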
My concern would be that students who don't know how to prompt appropriately and effectively will in fact be at the mercy of whatever the algorithm is. I think because my classes are ethics classes, the medium is the message. We get to talk about the IP issues, the copyright issues, the privacy issues. All of the things that generative AI is now at the center of are part of the content of a course. I'm doing an upper-division course this spring, mostly fifth-year senior engineers, called Future Ethics. We're using generative AI imaging to put together what we're calling provocatypes. They're looking at a technology and trying to figure out: what are all of the possible unintended consequences and contingencies that might affect this? What are the affordances and the dangers? But they can now generate an image that will help crystallize that conversation. An example would be: say our technology is that electric vehicle charging stations are ubiquitous. They can generate an image of a charging station, but they can also say, specifically: there are only three plugs; there's a card reader. Now, at peak times, who gets priority? First come, first served? Do different cards carry different privileges? Does this cause social stratification? How does this reflect the values of our society? All of that just from generating an image that people can then look at and start asking questions about.

Thinking about prompting, and you just talked a little bit about the importance of prompting and how to drive the conversation that you have with the generative technology, whether it be image generation or text generation: where do you see, or when should students start learning how to do that?

Well, I do have thoughts about that, but of course with the caveat that I'm higher ed and have never taught at other levels before. What I say needs to be qualified in that way. I certainly feel that every student, as a first-year student hitting the ground in college, should have a small class in which they work iteratively with AI. I feel this is such an important workforce skill. Now, if AI can be deployed in high schools to the same effectiveness, then I think that's good too, because students learn habits pretty early, particularly through high school. If the habit starting in ninth grade is "well, this is a dumb assignment, I can outsource this to GPT," then that's a lifelong feeling about outsourcing, as opposed to integrating its use. In an ideal world, ninth grade to me would be a good time to start to see AI as a co-partner, and a fallible one, just as we are. But absolutely, I believe that we should be integrating AI into first-year college education.

Let's go back to some of the thinking about the challenges and the ethics. What are your top challenges or concerns with the current state of LLMs?

The current state of LLMs in terms of the technology itself, or its use?

Use, sorry.

My number one concern is inconsistency. If you have five classes, and three faculty members say "shame, shame, you should never touch it, and if you do, you're going to fail," and two classes where they don't, it looks like an artificial game. Well, it looks artificial. Just as I was at the forefront of arguing that if students want to bring their devices to class they should, because they're grown-ups and they have to learn how to deal with this, I also believe that they should be using AI and that we should be helping them figure out how to do that well, and that's hard. Second concern: faculty don't have time to learn how to use it.
I think this inconsistency of policy has to do with the fact that we're pretty overloaded and there is currently no standard education for faculty. We fear looking stupid, just like everybody else does, and starting as a beginning learner with something like this is kind of terrifying. Just beginning to understand how it works would be a really useful equalizer for faculty, and I think that would begin to address that first problem. I am definitely concerned about the privacy issues also. I do weigh the morality of requiring students to use ChatGPT and having to warn them: anything you generate is going to be sucked back into ChatGPT. On the other hand, anyone using Gmail has already been through that. Bard has used all of their Gmail. Anybody who's using Google has already been through that. To some degree, I'm part of the problem in causing them to generate things that will clearly then be sucked up into the cloud. But at the same time, I think it's a good opportunity to talk about it. To talk about what you don't want to put into ChatGPT. To my absolute horror, last fall I discovered that students had started to use it as a personal mental health coach; they were telling it things and then asking it for advice. I realize that therapists are scarce and expensive, but it comes down to explaining what you don't want to put into ChatGPT. And they already knew this about Google; that was what was so odd. It wasn't as if they had no idea about privacy. It was: new technology, new format, learn again. I think inconsistency of expectation is one, privacy is another, and possibly a third is my fear that if we don't do this right, 10 years from now people won't value writing as a tool for thinking. My fear is that we will reinforce the idea of writing as product instead of writing as process. The less people externalize their thoughts and see them, the less they know themselves, so it's a Socratic problem. If you intend to know yourself, one of the best ways to do that is through conversation, and actually, the most precise way to do that is to converse with yourself through writing.

That whole privacy issue and the concerns there, it's interesting. Exactly: new technology, you have to relearn it; you don't even associate those types of concerns with it. That's compounded by something like mental health especially, where ChatGPT [inaudible] is getting better, but it might just spit out wrong information, and that could be a very dangerous path as well, and it's very confident, as you said earlier.

There have already been some documented cases of people who should have called a suicide hotline being told, "Yeah, you're right, there is no meaning in life." Wait a minute. Not what somebody needs to hear in the middle of the night.

Going back to your first point about that inconsistency: what advice would you offer to those faculty members, those instructors who are hesitant to use it in their class? Who have those more draconian policies of "you're not going to be using this, and if we suspect you, you'll be reported to the honor board," and things like that. What advice would you give, or suggestions that they could think about?

Well, I think, actually, discussing citation. Being clear that you hope they'll write this without an LLM, but that if they do use one, the LLM is a co-author and needs to be credited for its work, and you need to be credited for your work. Even a simple format, saying: this paper was co-authored by, your name, and ChatGPT, version in parentheses, or any other LLM. I submitted the following prompts. Insert here. I edited and/or adapted ChatGPT's suggestions in the following ways. Insert here.
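A baseline version of that acknowledgment, filled in hypothetically rather than quoting the MLA-based format from the seven-page guide, might look like this:

    This paper was co-authored by Jane Student and ChatGPT (GPT-3.5, free version).
    Prompts submitted:
        1. "Generate five possible angles on EV charging equity."
        2. "Improve the following paragraph for flow: [draft]."
    Sections I wrote myself: introduction, argument, conclusion.
    Sections ChatGPT drafted and I edited or adapted: the background summary,
    which I shortened and fact-checked against the course readings.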
Even some baseline guideline, so that students have to think about it. Because honestly, right now we're looking at the possibility of a future pitched battle. Students are beginning to figure out that Turnitin.com is not particularly accurate. All it's going to take is a lawsuit. I'm amazed that I haven't seen one already. A student who has been penalized for using a tool that they haven't used will make us change the way we do business. But I think we live in the world, and as much as ivory towers are ivory towers, particularly at a state university we are meant to serve the state, and we are meant to serve the workforce. We have to be cognizant of the fact that this is a skill that's needed. I believe it's a fundamental responsibility to learn enough to communicate it well. If we care about producing thoughtful, moral citizens for the state of Colorado, then we need to care enough about showing them how to be that.

It really is a disservice to students, thinking about preparing them for what comes after higher ed, not to train them and prepare them. What excites you about where generative AI in education is heading, and what worries you? This overlaps with what we've talked about before, but I'm just curious to hear your thoughts there.

What worries me, I'll just start there, is that higher education institutions are going to become even more irrelevant than many people already say they are. The more we divorce ourselves from what is going on in the real world, the harder it is to justify our existence as a public good. CU doesn't get very much taxpayer money, but if our argument is that we teach people how to read the world, we teach people to know themselves, we produce thoughtful, self-knowledgeable, moral citizens who are good for the nation, then we need to do that. I worry about relevance. I also worry about a generational cutoff, where lots of our experts in many fields, published scholars who are highly respected, are going to find it harder and harder to teach and assess student work. Somehow we need to bridge that gap. The 20-year-olds coming in are not having a problem. I'm at the higher end of that range, but I also love technology, and I love games, and I like a challenge. I see it as very doable. But that's my biggest fear. Now the exciting part. A novel I read: Neal Stephenson, who wrote Snow Crash and The Diamond Age and a bunch of other novels in the '90s that have been mined for decades by Silicon Valley for commercial ideas, had an AI assistant in Snow Crash called the Librarian, who accompanies you throughout your life, knows all of your interests and foibles, generates stuff for you while you're not looking, and knows when to present you with new information. I'm very excited about the idea of a learning companion who might start in elementary school and become richer and richer as you age. Even as a reminder figure later on: I've hit the point where I'm starting to forget a few things, and to be able to say, "I remember reading that book and really enjoying it; can you remind me what I said about it?" For me, that means conversational AI, and it means massive amounts of storage, and it means a firewalled AI, so you're not feeding every single thing into the cloud. To some degree, that's Vannevar Bush's memex, except that was microfilm. This is being able to remember through conversation, as one would with a lifetime friend. That's one.
Having that be a learning companion is, I think, really important. It doesn't replace a conversation in a classroom. It doesn't replace interacting with an expert in the field. But it is yet another scaffolding point. The other book that he wrote, The Diamond Age, has something called the Young Lady's Primer. That is a multimedia book that accompanies a child throughout their life in the same way. In his book, there was no AI; it was actors acting, which is pretty expensive. But the idea is that we might have a multimedia resource. Who knows what the interfaces will look like, whether it's holographic, on your wall, projected, or in your glasses, but people will have some ability to interface constantly with an AI, and also the ability to turn it off. There needs to be an off button. This is why Neuralink terrifies me. The idea that you would implant something that you can't turn off so violates my sense of humanity. There it is. When I see the possibility, I think of AI becoming a lifelong learning partner, and later in life an elder-care partner who can tell you the stories again, who can recommend something to read because you loved it when you were a kid. That would accompany, not replace, formal education.

No. It would just be right there with you, just a part of that process. That is an exciting future to think about. Reflecting on the past year and a half or so, where you've integrated it in your class and watched your students interact with it, learn from it, grow with it: are there any final thoughts, or recommendations, or considerations that you would like to voice?

My final thought would be that this is an excellent opportunity to reinvent oneself as a teacher and as a learner. Therefore, it's not scary. Yes, it's a little scary. But it's not a threat, it's an opportunity. I've been teaching in higher ed for 30 years this year. Anything that can stop me and cause me to think, how can I do this much better, more effectively, how can I support learning better, how can I do my job in a way that gives me more personal satisfaction, that's a good thing. To some degree, generative AI has been a mirror to look into at this time and figure out: why am I actually teaching? What do I really want students to be doing? How am I really assessing? This is a great thing. This is such a welcome opportunity to think through the ways in which I can do better.