Best Practices for Using AI in Scientific Manuscripts (ACS Nano 2023)
Document Details
Uploaded by ImpressiveSwamp3613
University of Dongola
2023
Summary
This editorial discusses best practices for using AI, specifically ChatGPT, when writing scientific manuscripts. It summarizes the strengths and weaknesses of AI language models for scientific communication and emphasizes the importance of critical evaluation and human oversight. It also considers the potential downsides of over-reliance on AI, such as hindering creativity, and suggests guidelines for responsible use in academic and scientific research.
Full Transcript
Editorial | www.acsnano.org

Best Practices for Using AI When Writing Scientific Manuscripts
Caution, Care, and Consideration: Creative Science Depends on It

Cite This: ACS Nano 2023, 17, 4091−4093
Published: February 27, 2023
© 2023 American Chemical Society | https://doi.org/10.1021/acsnano.3c01544

Science is communicated through language. The media of language in science is multimodal, ranging from lecturing in classrooms, to informal daily discussions among scientists, to prepared talks at conferences, and, finally, to the pinnacle of science communication, the formal peer-reviewed publication. The arrival of language tools driven by artificial intelligence (AI), like ChatGPT,1 has generated an explosion of interest globally. ChatGPT has set the record for the fastest growing user base of any application in history, with over 100 million active users in just two months, as of the end of January 2023.2 ChatGPT is merely the first of many AI-based language tools, with announcements of more either in preparation or soon to be launched.3−5 Many in scientific research and universities around the world have raised concerns about ChatGPT's potential to transform scientific communication6 before we have had time to consider the ramifications of such a tool or to verify that the text it generates is factually correct. The human-like quality of the text structure produced by ChatGPT can deceive readers into believing it is of human origin.7 It is now apparent, however, that the generated text might be fraught with errors, can be shallow and superficial, and can include false journal references and inferences.8 More importantly, ChatGPT sometimes makes connections that are nonsensical and false.

We have prepared a brief summary of some of the strengths and weaknesses of ChatGPT (and future AI language bots) and conclude with a set of our recommendations of best practices for scientists when using such tools at any stage of their research, particularly at the manuscript writing stage.9,10 It is important to state that even among the authors here, there is a diversity of thought and opinion, and this editorial reflects the middle ground consensus. In its current incarnation, ChatGPT is merely an efficient language bot that generates text by linguistic connections.11 It is, at present, "just a giant autocomplete machine".12 Since ChatGPT is the first of many models that will undoubtedly improve rapidly, within a few years we will almost certainly look back at ChatGPT like an old computer from the 1980s. It must be recognized that ChatGPT relies on its existing database and content and, at the time of writing of this editorial, fails to include information published or posted after 2021, thus restricting its utility when applied to the writing of up-to-date reviews, perspectives, and introductions. Therefore, for reviews and perspectives, ChatGPT is deficient due to its lack of the analytical capabilities that scientists are expected to possess and the experiences that inform us.

The most important concern for us as scientists is that these AI language bots are incapable of understanding new information, generating insights, or performing deep analysis, which would limit the discussion within a scientific paper. While appearing well formulated, the results are, however, superficial, and over-reliance on the output could squelch creativity throughout the scientific enterprise. AI tools are adequate for regurgitating conventional wisdom but not for identifying or generating unique outcomes. They might be worse at assessing whether a unique outcome is spurious or ground-breaking. If this limitation is true for ChatGPT and other language chatbots under development, then it is possible that reliance upon AI for this purpose will reduce the frequency of future disruptive scientific breakthroughs. This is concerning since a 2023 article has already concluded that the frequency of such disruptive scientific breakthroughs is on a negative trajectory.13 Scientific research is becoming less disruptive: think more cookie cutter and less CRISPR.

STRENGTHS OF THE ChatGPT LANGUAGE BOT

An AI-driven language bot can

(i) help to break mental log jams when writing, or when struggling to write those first words. Having some text to start with can enable a writer to overcome an activation barrier to productivity. That said, be aware that this starting point might mentally pin you to a certain way of thinking and writing, so do not let this text limit your creativity and insights. A better approach might be to use ChatGPT after completing a draft of your manuscript to provide a complementary perspective, to determine if key topics or points were missed, and to spark new ideas and directions.

(ii) make interesting analogies, when properly prompted, and generate seemingly creative links between disparate concepts and ideas, although these require a reality check to ensure that they are reasonable or plausible.

(iii) be used effectively to improve the title, abstract, and conclusion of your manuscript and to tailor it to a journal's parameters so that it better matches that journal's scope or readership.

(iv) identify references for a specific topic that might be missed by conventional literature searches. Reading and including such references can enrich your understanding of a topic area, but they must be carefully read or scanned to ensure that they are correct and relevant.

(v) provide guidance on writing structure by breaking up a difficult topic into smaller pieces. However, the bot might make poor suggestions, so caution is required when doing so.

(vi) level the playing field by facilitating composition by non-native English speakers. This language bot and others will almost certainly be included directly in other interfaces, such as Microsoft Office 365.14,15

(vii) help a writer be more thorough when covering a topic by reminding them of aspects they had not considered.

(viii) provide knowledge in an area in which one has little familiarity, in a structured, easy-to-digest manner. However, one must keep in mind that the output might be incomplete or lacking in creative insights, as described below.

(ix) develop code for Python and other computer languages.

CONCERNS REGARDING THE ChatGPT LANGUAGE BOT

An AI-driven language bot might

(i) be fast and easy to use, but perhaps too easy if one fails to use it responsibly and with care.

(ii) be used to write and to replace critical thinking and thorough literature reviews, to the detriment of the user. In the case of students, the writing of their first manuscripts is a transformative training experience. Over-reliance on these language bots deprives them of this opportunity, limiting their intellectual growth and confidence.

(iii) lead to banal, cookie-cutter, and uninteresting science if not used as only a jumping-off point for creative science. AI tools are typically good at regurgitating conventional wisdom but weak when it comes to identifying and generating unique outcomes. They can be even worse at judging whether a unique outcome is spurious, anomalous, or groundbreaking.

(iv) be used without reading the actual papers that support claims made by the author. As mentioned earlier, ChatGPT can invent references or spurious correlations. The output of the AI model cannot be taken at face value; all outputs need to be subjected to critical review to prevent errors, missing key information, or unrelated claims. ChatGPT might be more likely to generate incorrect information if the available data are incomplete or outdated.

(v) fail to provide both sides of controversial topics, particularly without user input. ChatGPT cannot express disruptive concepts.

(vi) inherit the built-in biases and falsehoods intrinsic to the scientific enterprise. It can suppress minority views that question or oppose a well-established concept or explanation of a scientific phenomenon, or overlook works with fewer citations, arising from intrinsic biases.16

(vii) generate text that is not forward-looking, as it might summarize the consensus without user intervention. Introductions and review papers that are based solely upon the output of ChatGPT will lack thoughtful insights on where a field is headed.

(viii) lead to an increase in submissions of perspectives, accounts, and reviews that lack nuance in the storyline and forward-looking discussion, since these manuscript types can be easily generated by ChatGPT from existing information.

(ix) generate output that is incorrect or has recently been shown to be false.8,17 Outputs can also be manipulated to support arguments with tailored prompts.

(x) present major challenges with regard to the reporting of clinically relevant findings that require transparency in outcomes reporting, clear communication of trial designs, and other information.18 Given the important role that publications can play in reporting clinically actionable findings that can drive practice change, the use of ChatGPT in these circumstances might require substantial oversight and disclosure.

To conclude, science operates upon an honor system. While there are now tools to identify text generated by ChatGPT,20 these AI language bots will continue to improve, both in terms of their performance and sophistication, and thus scrutinizing their use will be increasingly difficult. Please use these tools with extreme care and remind your colleagues and co-authors of the concerns and best practices when writing your own manuscripts. Ultimately, because scientific papers rely on human-generated data and interpretations, the scientific story requires creativity and knowhow that will be difficult to replicate using AI-based language bots.

AUTHOR INFORMATION

Complete contact information is available at: https://pubs.acs.org/10.1021/acsnano.3c01544

Notes
Views expressed in this editorial are those of the authors and not necessarily the views of the ACS.
Jillian M. Buriak, orcid.org/0000-0002-9567-4328
Deji Akinwande, orcid.org/0000-0001-7133-5586
Natalie Artzi, orcid.org/0000-0002-2211-6069
C. Jeffrey Brinker, orcid.org/0000-0002-7145-9324
Cynthia Burrows, orcid.org/0000-0001-7253-8529
Warren C. W. Chan, orcid.org/0000-0001-5435-4785
Chunying Chen, orcid.org/0000-0002-6027-0315
Xiaodong Chen, orcid.org/0000-0002-3312-1664
Manish Chhowalla, orcid.org/0000-0002-8183-4044
Lifeng Chi, orcid.org/0000-0003-3835-2776
William Chueh, orcid.org/0000-0002-7066-3470
Cathleen M. Crudden, orcid.org/0000-0003-2154-8107
Dino Di Carlo, orcid.org/0000-0003-3942-4284
Sharon C. Glotzer, orcid.org/0000-0002-7197-0085
Mark C. Hersam, orcid.org/0000-0003-4120-1426
Dean Ho, orcid.org/0000-0002-7337-296X
Tony Y. Hu, orcid.org/0000-0002-5166-4937
Jiaxing Huang, orcid.org/0000-0001-9176-8901
Ali Javey, orcid.org/0000-0001-7214-7931
Prashant V. Kamat, orcid.org/0000-0002-2465-6819
Il-Doo Kim, orcid.org/0000-0002-9970-2218
Nicholas A. Kotov, orcid.org/0000-0002-6864-5804
T. Randall Lee, orcid.org/0000-0001-9584-8861
Young Hee Lee, orcid.org/0000-0001-7403-8157
Yan Li, orcid.org/0000-0002-3828-8340
Luis M. Liz-Marzán, orcid.org/0000-0002-6647-1353
Paul Mulvaney, orcid.org/0000-0002-8007-3247
Prineha Narang, orcid.org/0000-0003-3956-4594
Peter Nordlander, orcid.org/0000-0002-1633-2937
Rahmi Oklu, orcid.org/0000-0003-4984-1778
Wolfgang J. Parak, orcid.org/0000-0003-1672-6650
Andrey L. Rogach, orcid.org/0000-0002-8263-8141
Mathieu Salanne, orcid.org/0000-0002-1753-491X
Paolo Samorì, orcid.org/0000-0001-6256-8281
Raymond E. Schaak, orcid.org/0000-0002-7468-8181
Kirk S. Schanze, orcid.org/0000-0003-3342-4080
Tsuyoshi Sekitani, orcid.org/0000-0003-1070-2738
Sara Skrabalak, orcid.org/0000-0002-1873-100X
Ajay K. Sood, orcid.org/0000-0002-4157-361X
Ilja K. Voets, orcid.org/0000-0003-3543-4821
Shu Wang, orcid.org/0000-0001-8781-2535
Shutao Wang, orcid.org/0000-0002-2559-5181
Andrew T. S. Wee, orcid.org/0000-0002-5828-4312
Jinhua Ye, orcid.org/0000-0002-8105-8903

REFERENCES

(1) https://openai.com/blog/chatgpt/ (accessed February 2, 2023).
(2) ChatGPT Sets Record for Fastest-Growing User Base - Analyst Note. https://www.nasdaq.com/articles/chatgpt-sets-record-for-fastest-growing-user-base-analyst-note (accessed February 2, 2023).
(3) China's Baidu to Launch ChatGPT-Style Bot in March - Source. https://www.reuters.com/technology/chinas-baidu-launch-chatgpt-style-bot-march-source-2023-01-30/ (accessed February 2, 2023).
(4) Google Is Asking Employees to Test Potential ChatGPT Competitors, Including a Chatbot Called 'Apprentice Bard'. https://www.cnbc.com/2023/01/31/google-testing-chatgpt-like-chatbot-apprentice-bard-with-employees.html (accessed February 2, 2023).
(5) Anthropic, an A.I. Startup, Is Said to Be Close to Adding $300 Million. https://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html (accessed February 2, 2023).
(6) Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach. https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html (accessed February 2, 2023).
(7) Abstracts Written by ChatGPT Fool Scientists. https://www.nature.com/articles/d41586-023-00056-7 (accessed February 2, 2023).
(8) Did ChatGPT Just Lie To Me? https://scholarlykitchen.sspnet.org/2023/01/13/did-chatgpt-just-lie-to-me/ (accessed February 2, 2023).
(9) Tools Such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for Their Use. https://www.nature.com/articles/d41586-023-00191-1 (accessed February 2, 2023).
(10) ChatGPT Is Fun, But Not an Author. https://www.science.org/doi/10.1126/science.adg7879 (accessed February 2, 2023).
(11) ChatGPT Has Convinced Users That It Thinks Like a Person. Unlike Humans, It Has No Sense of the Real World. https://www.theglobeandmail.com/opinion/article-chatgpt-is-a-reverse-mechanical-turk/ (accessed February 2, 2023).
(12) Grimaldi, G.; Ehrler, B. AI et al.: Machines Are About to Change Scientific Publishing Forever. ACS Energy Lett. 2023, 8, 878−880.
(13) Papers and Patents Are Becoming Less Disruptive Over Time. https://www.nature.com/articles/s41586-022-05543-x (accessed February 2, 2023).
(14) Microsoft Is Looking to Add ChatGPT to Office 365. https://www.forbes.com/sites/quickerbettertech/2023/01/15/microsoft-is-looking-to-add-chatgpt-to-office-365and-other-small-business-tech-news-this-week/?sh=65366dfb6f96 (accessed February 2, 2023).
(15) Microsoft Bets Big on the Creator of ChatGPT in Race to Dominate A.I. https://www.nytimes.com/2023/01/12/technology/microsoft-openai-chatgpt.html (accessed February 2, 2023).
(16) Lerman, K.; Yu, Y.; Morstatter, F.; Pujara, J. Gendered Citation Patterns Among the Scientific Elite. Proc. Natl. Acad. Sci. U.S.A. 2022, 119, e2206070119.
(17) Disinformation Researchers Raise Alarms About A.I. Chatbots. https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html (accessed February 9, 2023).
(18) For example, ensuring compliance with Consolidated Standards of Reporting Trials (CONSORT) guidelines.
(19) ChatGPT Listed as Author on Research Papers: Many Scientists Disapprove. https://www.nature.com/articles/d41586-023-00107-z (accessed February 2, 2023).
(20) https://gptzero.me/.