# The Advent of the Information Age

The Information Age, also known as the Computer Age, Digital Age, or Age of New Media, is a historical period beginning in the mid-20th century, whose defining characteristic is a rapid shift from traditional industries to an economy driven by the growth of information, which can be easily accessed both through traditional media (e.g., newspapers, television, and radio) and through the manipulation of information by computers and computer networks.

## The Beginning of the Information Age

John Waters, President and Creative Director of Waters Design Associates, Inc., describes the Information Age as a time when information got ahead of humankind and grew at a speed that we were unprepared to handle. The beginning of the Information Age was set up by the invention of the transistor in 1947 by American physicists John Bardeen and Walter Brattain, and of the optical amplifier in 1957 by American physicist Gordon Gould. Both were necessary for the development of computers and of fiber-optic communications, setting the stage for the explosion of information through their efficient methods of transmitting it.

## The Building Blocks

However, the building blocks of the Information Age lie much farther back in the past. These building blocks include the development of writing systems across ancient civilizations, such as the ancient Sumerian cuneiform from 3000 B.C., the ancient Egyptian hieroglyphics from 2900 B.C., and the ancient Chinese small seal script of the Qin Dynasty in 200 B.C. Ancient antecedents of the modern book also contributed to the unfolding of the Information Age. These include the ancient Egyptian papyrus roll around 500 B.C., the parchment codex of the Roman Empire around 100 A.D., and ancient Chinese wood-block printing and paper in 105 A.D.

## The Printing Press

Ancient printing methods were manual, tedious, and slow: the cloth, paper, or other medium had to be brushed or rubbed repeatedly to complete the transfer of ink. Then, in 1440, the German goldsmith Johannes Gutenberg invented the printing press, a device that applies pressure to an inked surface lying on a medium (i.e., cloth or paper) to transfer the ink. Gutenberg's hand mold printing press led to the creation of metal movable type. Later, the two inventions were combined to make printing faster and to drastically reduce the cost of printing documents. From this invention, a new branch of print media was also introduced, which became known as "the press."

## The Rise of Computing

As industries progressed, they needed computers--individuals who compiled actuarial tables and did engineering calculations. During World War II, the Allies (Great Britain, the United States, and the Soviet Union)--countries that opposed the Axis powers (Germany, Japan, and Italy)--were challenged by a serious shortage of human computers for military calculations. When soldiers left for war, the shortage got worse, so the United States mechanized the problem by building the Harvard Mark I, a 50-foot-long electromechanical computer capable of doing calculations in seconds that took people hours. At the same time, the British needed mathematicians to crack the German navy's Enigma code, which the Germans used to encrypt their messages with a machine called Enigma that looked like an oversized typewriter.
## The Enigma Code

Alan Turing, an English mathematician, was hired in 1936 by the British top-secret Government Code and Cipher School at Bletchley Park to break the Enigma code. His code-breaking work became an industrial process, with 12,000 people working three shifts day in and day out. To counteract this, the Nazis made the Enigma machines more complicated, with approximately 10^14 possible permutations. Turing designed the Bombe, an electromechanical machine that allowed the British to read all daily German naval Enigma traffic by searching through the permutations. This contribution of Turing and the British team at Bletchley saved millions of lives, since the invention shortened the war by as much as two years.

## The Apple I

In 1976, Steve Wozniak built a computer around the 8080 microprocessor hooked up to a keyboard and television. His friend Steve Jobs called the computer the Apple I and sold copies of this machine to a Silicon Valley shop. A year earlier, Bill Gates had realized that PCs needed software and sold his Microsoft programs.

## The Internet

The Internet was developed during the 1970s by the United States Department of Defense and was used mainly by scientists to communicate with other scientists. One early problem faced by Internet users was speed. Phone lines could only transmit information at a limited rate. The development of fiber-optic cables allowed billions of bits of information to be received every minute. Companies like Intel developed faster microprocessors so personal computers could process the incoming signals at a more rapid rate.

## The World Wide Web

In the early 1990s, the World Wide Web was developed, largely for commercial purposes. Corporations created home pages where they could place text and graphics to sell products. Soon, airline tickets, hotel reservations, books, and even cars and homes could be purchased online. Colleges and universities posted research data on the Internet so students could find valuable information without leaving their dormitories. Companies soon discovered that work could be done at home and submitted online, so a whole new class of telecommuters began to earn a living from home offices.

## The New Forms of Communication

New forms of communication were introduced. Electronic mail, or e-mail, was a convenient way to send a message to associates or friends. Messages could be sent and received at the convenience of the individual. A letter that took several days to arrive could be read in minutes. Internet service providers set up electronic chat rooms: open areas of cyberspace where interested parties could join in a conversation with strangers. Advocates of the Internet cited its many advantages. The commercial possibilities were limitless. Convenience was greatly improved. Chat rooms and e-mail allowed individuals to converse who might never have had the opportunity in the past. Educational opportunities were greatly enhanced because of the wealth of knowledge now placed at the fingertips of any wired individual. Surfing the net became a pastime in and of itself. Very quickly, from 1973 onwards, the Internet paved the way for social networking and gave rise to various social media platforms, which served a plethora of functions, such as instant messaging, conferencing and bulletin-board forum systems, e-mail exchange, game-based social networking, business-oriented social networking, video and voice calling services, blogging, and image and video hosting.
## The Anti-Internet Sentiments

With the Internet's fast development, anti-Internet sentiments also became rampant. Critics charged that the Internet created a technological divide that widened the gap between the haves and have-nots. Those who could not afford a computer or a monthly access fee were denied these possibilities. Many decried the impersonal nature of electronic communication compared to a telephone call or a handwritten letter. Hate groups were using the Internet to expand their bases and recruit new members. The unregulated nature of the Internet allowed pornography to be broadcast to millions of homes. Protecting children from these influences, or even from meeting violent predators, would prove to be difficult.

## Evolutions of the Information Age in the 2010s

In the 2010s, the Information Age saw even greater leaps, so much so that the decade came to be known as a "decade of disruption." These leaps were made possible by significant technological advancements and societal shifts that occurred in the 2010s, which continued to capitalize on the sophisticated information theory and technologies established during the early years of the period. In the 2010s, mobile communication upgraded from 3G to 4G networks. If 3G networks kicked off an age of calling, texting, and Internet connectivity for mobile phone users, the introduction of 4G offered Internet speeds up to ten times faster than its predecessor. Download speeds increased from 1.5 to 15 megabits per second (Mbit/s). This development cut the time needed to download an 800 MB video from about five hours on a 3G network to 43 seconds on 4G. It is this transition from 3G to 4G that sped up mobile application technology, so that people could do even more on their mobile devices and away from their desktop computers. This evolution is most strikingly evidenced by the popularity of streaming services such as Netflix and Spotify, mobile shopping on e-commerce sites such as Lazada and Shopee, and the use of social media on the go. As such, industries such as smartphone manufacturing, social media, e-commerce, and streaming services all benefited greatly from this Information Age leap of the 2010s. This also translated into a drastic increase in the daily time users spent on their mobile devices--what was only 32 minutes in 2011 became 132 minutes in 2019.

## Social Media

Social media also brought people's lives "online" in the 2010s. In the time of desktop computers, users went to social media platforms to log events that had happened in the past day, week, or month. Platforms were like virtual scrapbooks, where users uploaded their collection of experiences after the fact. Friendster was the world's most popular social network in the 2000s; there, users could contact other users, maintain those contacts, and share and leave online content and media, such as testimonials. But as the social media landscape changed and mobile devices became more common, social networking became a more instantaneous broadcast of life events and experiences. Think about the possibilities with social media platforms of the 2010s, such as Facebook, Twitter, and Instagram. Individuals can do so much more, much of which was beyond the constraints of the 2000s social networks. Moreover, the social media of the 2010s were no longer just for individual users; they became sites for companies, news organizations, and governments to efficiently connect with their audiences and communicate information in real time.
Social media transformed into one-stop platforms for information, which captured users' attention and took up a significant amount of their time. Advertisers took notice of this shift and, in turn, poured more of their advertising spending into social media platforms and search engines and less into traditional media. In fact, in 2019, digital advertising overtook traditional advertising.

## Artificial Intelligence

Artificial intelligence (AI) and big data science and technology also took off in the last decade. As users transact online--making payments, searching and viewing content, and using mobile apps--they leave behind gigabytes of digital trail, which can be mined and sifted through to spot patterns, create instructions for search engines, offer real-time directions, and even geo-target users for digital advertising. Today, it is not uncommon for an advertisement to pop up on your social media timeline after a recent search elsewhere, say on Google. Geotagging, or the process of appending geographic coordinates to the location of a mobile device, has become very common on the Internet. All these became possible in the 2010s because of the emergence of AI algorithms, which continued to learn and advance their capability to understand ever more complex and prolific datasets (i.e., big data). E-commerce, social media, and search engines are the primary beneficiaries of this shift. Between 2010 and 2019, the volume of data stored on the Internet exploded from only two zettabytes to 41 zettabytes. Moreover, through AI algorithms, the share of data available in structured formats rose from only 9% in 2010 to 13% in 2019, making it easier to index, tag, or reference data for various purposes. This opened a gold mine of data which advanced the frontiers of information science, marketing and advertising, and communications.

## Cloud Computing

Business efficiency and process streamlining reached new heights as cloud computing enabled organizations and firms to conduct critical activities in digital environments. The software industry, the primary beneficiary of this shift, quickly migrated from physical storage to the new business model of software-as-a-service. In this business model, companies earn by collecting subscription revenues and offer real-time software updates, remote access, and centralized data storage. The growth of cloud storage in the last decade was unprecedented. In 2010, over 90% of all data was stored on local servers; by 2019, public cloud storage had taken around 30% of this share and is expected to take over both consumer and enterprise cloud storage in the next decade.

## 2020 Information Age Shifts

As the world continues to push the boundaries of information science, more evolutions in information science and technology are expected to drive industries in the 2020s and beyond. Interestingly, we find ourselves in the middle of some of these shifts as they happen in this decade. If the 2010s saw the shift from 3G to 4G networks, making Internet speeds faster than ever, the 2020s will see 4G base technology evolve into 5G, which will mean even faster Internet speeds. 5G networks will in turn allow even more sophisticated functions and processes using the Internet. Even now, the offshoots of 5G technology are seen in the adoption of connected devices in homes, cities, and enterprises.
Smart cities will become more common as sectors and industries move to connect devices that gather and analyze real-time data, which will make processes more efficient, reduce downtime, and free up time to attend to new products and initiatives. Although some may, in fact, already be enjoying 5G connections, especially in city centers and business districts in the country, the trajectory is towards the mass adoption of this technology so that every possible industry or enterprise can benefit from it. For this to happen, capital and investment coupled with supportive government policies will be required. In turn, manufacturers of connected devices, semiconductor companies, network providers, and even telemedicine can benefit greatly from 5G technology.

In this decade, cloud computing is expected to evolve into edge computing, which is the deployment of computing and storage resources right at the data source, without the need to transmit data across a large digital network. This shift is driven by the continued decline of computing costs, network infrastructure improvements, and more sophisticated AI algorithms. The proximity to data which edge computing offers facilitates faster insights, improved response times, and better bandwidth availability, delivering stronger business benefits (a short code sketch below illustrates this). From the wearables on your wrist to drone-enabled crop management, edge computing taps the benefits of instantaneous data interpretation. Through edge computing, robotic surgery, autonomous vehicles, gaming, and smart factories are expected to reach new heights in the 2020s. For edge computing to flourish, a robust local digital infrastructure and an extensive 5G buildout across a geographic location will be required. In terms of industries that will benefit, foremost will be cloud infrastructure companies, device makers, AI developers, smart factories, the streaming gaming industry, autonomous vehicle producers, and robotic surgery.

## The Future of 5G Technology

Healthcare will go digital in the 2020s through AI diagnoses and telemedicine. The world's aging population will put increasing pressure on healthcare to become more efficient. The need to reduce costs, improve patient outcomes, and optimize physicians' time brings AI and telemedicine together to reduce disruptions to current healthcare standards across the world. The potential of telemedicine was realized during the outbreak of the COVID-19 pandemic. When in-person doctor and hospital visits were impossible, telemedicine was the way for doctors and healthcare professionals to deliver diagnostic and clinical services to patients around the clock, despite the barriers brought by physical and social distancing and geographic distance. However, as elsewhere in the world, telemedicine in the Philippines has a very low utilization rate. The healthcare industry has yet to see the full extent of the impact of COVID-19 on attitudes towards, acceptance of, and actual utilization of this emerging trend. All around the world, nonetheless, telemedicine has a good outlook, as the penetration of new technologies promises to reduce costs and add convenience for both patients and doctors. Alongside this, AI technologies will further push the digitalization of healthcare and welcome new AI-driven developments, such as real-time MRI interpretation and AI chatbots for triage and basic medical services.
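Returning to edge computing: as a rough illustration of why processing data at its source saves bandwidth, here is a minimal, hypothetical Python sketch. The sensor name, number of readings, and JSON payload format are illustrative assumptions, not a real deployment; the point is only the contrast between shipping every raw reading to a distant cloud and transmitting a locally computed summary.

```python
"""Toy comparison of cloud-style vs. edge-style handling of sensor data."""
import json
import random
import statistics

def simulate_readings(n: int = 1_000) -> list[float]:
    """Generate n fake temperature readings from a hypothetical sensor."""
    return [20.0 + random.gauss(0, 0.5) for _ in range(n)]

def cloud_payload(readings: list[float]) -> bytes:
    """Cloud pattern: every raw reading crosses the network."""
    return json.dumps({"sensor": "temp-01", "readings": readings}).encode()

def edge_payload(readings: list[float]) -> bytes:
    """Edge pattern: aggregate locally, transmit only a compact summary."""
    summary = {
        "sensor": "temp-01",
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": round(max(readings), 2),
    }
    return json.dumps(summary).encode()

if __name__ == "__main__":
    data = simulate_readings()
    print(f"raw upload:  {len(cloud_payload(data)):>7,} bytes")
    print(f"edge upload: {len(edge_payload(data)):>7,} bytes")
```

Under these toy assumptions, the edge-style payload is orders of magnitude smaller than the raw upload, which is the bandwidth and response-time benefit described above; real systems add device management, security, and far richer analytics on top of this basic idea.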
## The Information Age and the Fourth Industrial Revolution

The evolutions of science and technology during the Information Age, particularly in the last two decades, led to the argument that the world may well be in the midst of the Fourth Industrial Revolution, also called 4IR or Industry 4.0. The German engineer, economist, and founder of the World Economic Forum (WEF) Klaus Schwab popularized the term, which is now used widely in scientific literature to refer to a period where changes are not mere improvements to efficiency, but a shift in industrial capital brought about by the fusion of the latest developments in AI, robotics, the Internet of Things, 3D printing, gene editing, and quantum computing. Industry 4.0 is thus described as a period marked by the blurring of the divides between the physical, digital, and biological worlds. As Industry 4.0 increases operational efficiency between previously hardly overlapping environments, four key themes will be the hallmark of present industries:

1. **Interconnection** - Machines, devices, sensors, and people will be able to seamlessly connect and communicate with each other through the Internet of Things (IoT) or the Internet of People (IoP).
2. **Information transparency** - As information becomes more accessible and transparent, Industry 4.0 technology will provide stakeholders with comprehensive information to make decisions.
3. **Technical assistance** - Industry 4.0 will also establish the technological prerequisites of systems able to assist humans in decision-making and problem-solving and help them with difficult or unsafe tasks.
4. **Decentralized decisions** - Cyber-physical systems of Industry 4.0 will be capable of independent decision-making and autonomous performance of tasks.

As additional changes in the 2020s compound the phenomenal innovations of the 2010s, information sciences and technologies are expected to flourish and find new niches in previously unimagined industries--Industry 4.0. Autonomous vehicles, conversational AIs, the Internet of Things, and augmented and virtual reality have their feet at the doorstep of the future. As these information technologies become more and more common, profound changes in people's lives are also expected to take place. These prospects of a 4IR-driven future, where information plays a pivotal role in everyday living, bring forth not only advantages but also disadvantages, some of which shall be this generation's greatest moral and ethical dilemmas.

## Four Ethical Issues of the Information Age

As early as 1986, a time when the Information Age was only in its infancy, the American professor and science philosopher Richard O. Mason sounded the call for the ethical use of information. In his article published in the MIS Quarterly, Mason proposed four major issues of information ethics for the Information Age. Interestingly, more than three decades after Mason first called the world's attention to them, these issues are ever more relevant and cover a great deal of present-day moral and ethical dilemmas of the Information Age. The issues go by the acronym PAPA (Privacy, Accuracy, Property, and Accessibility), and each comes with critical questions that should guide individuals in the ethical use of information.

1. **Privacy** - Mason explains that citizens must exercise caution in using information technologies.
He pinpoints two threats to privacy which, although written in the context of 1986, still strike a familiar chord with many who may have fallen prey to abuse of information and, with it, vicious attacks on the Internet. First, he points to the rapid growth of information technology, particularly its ever more pervasive capacity for surveillance, communication, computation, storage, and retrieval. Second, and more insidious for Mason, the increased value of information in decision-making impels policymakers to covet information, which often comes at a price--invading people's privacy. Thus, he poses the following critical questions in relation to the ethical issue of privacy.
   - What information about one's self or one's associations must a person reveal to others, under what conditions and with what safeguards? What things can people keep to themselves and not be forced to reveal to others?
2. **Accuracy** - Misinformation fouls up people's lives. This is exacerbated when the party holding inaccurate information is in a position of power and authority. Another burden on the accuracy of information arises when people rely on information for matters of life and death. A recent example would be the spread of fake news about COVID-19, its behavior, and its effects on one's health, particularly during the onset of the pandemic. In the absence of accurate information, people became more susceptible to misinformation and, by extension, to the health risks of the coronavirus. In this sense, the right to accurate information, when its absence may mean death or severe medical conditions for many people, is tantamount to healthcare rights. We can ask the same questions Mason raised in 1986 about the accuracy of information of the aforementioned example.
   - Who is responsible for the authenticity, fidelity, and accuracy of information? Similarly, who is to be held accountable for errors in information, and how is the injured party to be made whole?
3. **Property** - Mason states that the issue of property is probably the most complex of the four. Several economic and ethical questions surrounding intellectual property hinge on the special attributes of information and the ways it is transmitted. Intellectual property is difficult to safeguard because, unlike tangible property, once it is produced it becomes communicable, and it can be replicated without destroying the original, regardless of how difficult it was to produce. Information is easily reproducible on the Internet. Thus, it becomes extremely challenging to reimburse the right-holder when a third party uses their intellectual property. Mason poses these questions on the ethical issue of property in the Information Age.
   - Who owns information? What are the just and fair prices for its exchange? Who owns the channels, especially the airways, through which information is transmitted? How should access to this scarce resource be allocated?
4. **Accessibility** - It is no longer sufficient that individuals have only the intellectual skills to navigate through information traffic. Reading, writing, reasoning, and calculating are tasks for education, but the demands of literacy in the Information Age compel stakeholders to ensure access to information technologies and to the information itself.
   - What information does a person or an organization have a right or a privilege to obtain, under what conditions and with what safeguards?

It is very common nowadays to come across exaggerated and provocative headlines with excessive use of capital letters or emotional language. These are red flags. Most importantly, critically assess every piece of information you come across online.
If something seems too good to be true, or too weird, or too reactionary, chances are it is. Again, be skeptical.

## Disinformation, Fake News, and the Post-Truth Era

The United Nations Educational, Scientific and Cultural Organization (UNESCO), in its handbook **Journalism, "Fake News" and Disinformation: A Handbook for Journalism Education and Training**, defines disinformation as "information that is false and deliberately created to harm a person, social group, organization, or country" (Ireton & Posetti, 2018). In the handbook, disinformation is categorized as one of three types of untruth, all of which are often shared on social media. **Mal-information** is defined as information that is based on reality but used to inflict harm on a person, social group, organization, or country, while **misinformation** is information that is false but not created with the intention of causing harm. An example of disinformation is when a political candidate during election season posts false statistics with the intent of discrediting their competitor. Misinformation is when a person posts an article containing false information but does not realize it. Mal-information is when someone posts a photo of victims of a disaster without any context--the photo depicts something that really happened before, but without any credible information to accompany it. This can ignite hatred towards the particular political, social, ethnic, or racial group concerned. Thus, in mal-information, there is a deliberate attempt to remove context so that the audience will not be able to properly and credibly assess the information. UNESCO further distinguishes disinformation from misinformation in that the former is information that is false, and the person disseminating it knows that it is false. Thus, disinformation can be conceived as "a deliberate, intentional lie, and points to people being actively disinformed by malicious actors" (UNESCO, 2018, pp. 44-45). Meanwhile, in misinformation, the information is false, but the person disseminating it believes that it is true.

The spread of misinformation can be traced to the advent of the Information Age and the Internet. As previously discussed, better information technologies and systems mean that information is stored, processed, and spread more rapidly than ever before. Often, this entails just a few clicks of a mouse. As information became valuable, people, groups, and organizations rushed to deliver news and information first. In an Internet-speed economy, the first to deliver news and information is often rewarded with more hits, views, and website traffic; and more of these means greater leverage--be it economic or political gain. The world got caught in what UNESCO calls a "perfect storm," a potentially catastrophic present where powerful institutions use tools and propaganda for their vested interests.

## The Larger Ecosystem of Ethical Dilemmas

The larger ecosystem of ethical dilemmas concerning the use and distribution of information encompasses **fake news**. Fake news is the term given to a range of inaccuracies of information. At its core, however, fake news can be defined as news stories or articles that are false (i.e., the story itself is fabricated, without any supporting fact, source, or quotation to vet its veracity). Often, fake news is written or published to purposely influence views, promote political motives, damage a reputation, or deceive readers.
However, the term "fake news" has been used to refer not only to false news stories and articles. It also pertains to those that may contain some truth but lack any context. It can also refer to those that contain verifiable facts but are written in language intended to trigger hatred or doubt and that leave out important details, so that the audience is unable to assess the entire breadth of the news or situation. In these cases, the term "fake news" overlaps with the three untruths: disinformation, misinformation, and mal-information.

We live in an era where lies spread faster than truths. As the 2021 Nobel Peace Prize laureate and the first Filipino Nobel laureate Maria Ressa puts it, "We have coronavirus in the real world. Here, in the information ecosystem, you have the virus of lies." The Oxford University Press (OUP) says that, today, "objective facts are less influential in shaping public opinion than appeals to emotion and personal belief." This is how OUP defined the **Word of the Year 2016: "post-truth."** We live in a post-truth era, a time when the truth and objective facts--those that, according to Richard O. Mason, are difficult to produce--take the back seat and, in front, are lies, half-truths, and whatever feels good or fits one's personal biases. Post-truth is when no one believes anything unless they read it on Facebook or watch it on TikTok--if it has been posted, it must be "true." The mounting distrust towards governments, institutions, and big media drives people away from traditional establishments of information--decades-old broadcast news companies, broadsheet newspapers, and offices of public officials--and towards emerging and alternative sources. While there is nothing wrong in itself with more and new sources of information, the current information ecosystem makes these new sources more susceptible to the virus of lies that spreads quickly online. In the post-truth era, everything is just speculation and nothing is fact; nothing is true unless it feels good or fits a person's agenda or bias.

The Philippines is not new to the moral and ethical dilemmas of the post-truth era. At the onset of the COVID-19 pandemic, misinformation became very prevalent. For example, the claim that the COVID-19 virus, SARS-CoV-2, is a type of rabies that originated from bats is false, since the virus that causes the disease belongs to the coronavirus family. The misinformation that boiled ginger cures COVID-19 was also refuted, as no concrete scientific proof backs up the claim. At one point, the Department of Health (DOH) refuted a supposed advisory on fake cigarettes as a carrier of COVID-19 transmission; the DOH did not issue such an advisory. Along the way, DOH Undersecretary Ma. Rosario Vergeire debunked the claim from a supposed Internet video insinuating that bananas cure COVID-19. This video spread quickly as, at one point, then Presidential Spokesperson Salvador Panelo advocated for it. Vergeire said that, while bananas are a healthy food source, evidence of their effectiveness against COVID-19 has yet to be established.

Aside from misinformation about COVID-19, the repercussions of the post-truth era in the Philippines can also be observed in the use of propaganda for political motives. In a three-part investigative report written in 2016, Ressa claims that propaganda techniques have been deployed on the Internet to shift public opinion on prevailing social and political issues.
In her report, she identified paid trolls, fallacious reasoning, leaps in logic, and poisoning the well as techniques meant to spread FUD (fear, uncertainty, and doubt). The use of paid trolls as a propaganda technique was corroborated by a 2017 Oxford University study, which found evidence of the use of keyboard trolls to spread and amplify messages in support of government policies (Matsuzawa, 2017). This propaganda war has effectively used disinformation as a tool to weaponize the Internet and convince people of the need for certain economic and political policies. A recent Social Weather Stations (SWS) survey even reports that 70% of Filipinos believe that fake news is a serious problem in the country (Inquirer Research, 2022).

Living in a post-truth era means greater responsibilities for the users of information (i.e., the people). Navigating this period can be uncertain and worrisome, but people must battle against untruths. One consequence of the post-truth era is that people today must be fact-checkers. Fact-checking, as the term suggests, is the activity and process of verifying factual information to ensure veracity and correctness. Fact-checking may well be one of the 21st-century skills that people must adopt. Fact-checking is important because, as discussed, disinformation, misinformation, mal-information, and the spread of fake news can sway one's opinions or beliefs. For some, this means a violation of their healthcare rights for not having access to verified information. For others, it means curtailed informed choices and decisions because of propaganda. In any case, these consequences are felt and concretized in real-world ecosystems of people interacting on an everyday basis. In the words of Ressa, in her acceptance speech for the prestigious Woodrow Wilson Award at Princeton University in 2022, "online violence is real-world violence." Anything that happens in the online world has a repercussion in the physical world. If people believe that lies are facts as a result of robust disinformation systems online, this can metastasize into real-world harms, such as bullying, verbal aggression, and even physical attacks. The disinformation crisis is so alarming that we cannot live another day letting untruths slip by. On a personal level, fact-checking can be our response to the moral and ethical dilemmas of the post-truth era. The following are some helpful pointers from Benedictine University (2022) on how to spot fake news and fact-check accordingly:

1. **Check credentials** - Is the author an expert in the field that the article is concerned with? Do they currently work in that field? You may check their public profiles, such as LinkedIn or Google Scholar, or do a quick Google search to determine whether the author can speak about the subject with authority and accuracy.
2. **Read the "About Us"** - Does the website where the news article is published have an "About Us" section? Sometimes this section is on a tab at the top of the website or appears as a link at the bottom. Remember, all reputable websites have this section and will even provide a way for you to contact them through an email address or phone number.
3. **Look for bias** - Does the news or article lean toward a particular point of view? Does it link to external sources, such as websites, files, or images, that are heavily skewed towards that point of view? Take note: biased articles may not be giving you the whole story.
4. **Check the dates** - Like fresh produce, information stays reliable only for as long as it is up to date. Information can have an expiration date, and the goal is to use the most updated version that you can find.
5. **Check out the source** - When an article cites sources, it helps to follow the links and check them. Be wary, though, because official-sounding associations are sometimes just biased think tanks or represent only a portion of a rather large group of people. If sources are not available, read as much about the topic as you can to gather as much information as is out there, which can help you decide whether the information presented in the news or article is credible or not.
6. **Interrogate URLs** - Domain manipulation has also become a practice among those intending to disinform. For example, a website can look like an .edu domain, but if it is followed by a .co it may well be a fake or deceptive site. If you come across a variation of a well-known URL, do a little investigating (see the short sketch after this list). Remember, disinformation spreaders want to make the source look credible, and one way to do that is to make it look like the truly credible ones.
7. **Suspect the sensational** - The more sensational a news story or article is, the more skeptical you should be. Responsible journalism is neither biased nor sensational and aims to present the story and all its sides as they are.
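To make the URL-interrogation tip concrete, here is a minimal, hypothetical Python sketch. The trusted-domain list and the example addresses are illustrative assumptions only; real fact-checking still depends on curated source lists and human judgment.

```python
"""Rough URL sanity check: flag hostnames that merely imitate a trusted domain."""
from urllib.parse import urlparse

# Hypothetical whitelist of domains the reader already trusts.
TRUSTED_DOMAINS = {"who.int", "un.org", "doh.gov.ph"}

def looks_suspicious(url: str) -> bool:
    """Return True when the hostname only resembles, but is not, a trusted domain."""
    host = urlparse(url).hostname or ""
    for trusted in TRUSTED_DOMAINS:
        if host == trusted or host.endswith("." + trusted):
            return False  # exact match or genuine subdomain of a trusted domain
        if trusted in host:
            return True   # e.g. "who.int.example.co" only imitates "who.int"
    return True           # unknown domain: verify before trusting

if __name__ == "__main__":
    for url in ["https://www.who.int/news",
                "http://who.int.health-update.co/article"]:
        verdict = "suspicious, verify further" if looks_suspicious(url) else "recognized domain"
        print(f"{url} -> {verdict}")
```

The key idea is that a hostname which merely contains a trusted name (such as who.int.health-update.co) is not the same as the trusted domain itself; that lookalike pattern is exactly the kind of domain manipulation the tip above warns about.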