2023 will be the year of artificial intelligence, but there are challenges
Although artificial intelligence has been part of our lives for many years, 2022 served as a major proving ground for the technology. Through ChatGPT, AI art generators and Hollywood's embrace of AI, the technology gained a new kind of foothold, and hype, with the general public. It also drew a new wave of ethical and privacy concerns.
For all that 2022 did to raise the technology's profile, Northeastern University AI experts predict that 2023 will be another significant year for AI, one that brings its own challenges.
According to Usama Fayyad, executive director of Northeastern's Institute for Experiential AI, AI's trajectory in 2022 was not all hype. But as the technology's public profile grew, so did the misunderstandings and misinterpretations surrounding it.
"There were definitely significant contributions in demonstrating what AI could do, but they didn't adequately explain the limitations and what can be done," Fayyad said.
Recent discussions surrounding artificial intelligence have been characterized by a tension between fear that technology will automate human jobs and a more realistic understanding that these tools will augment, not replace, human capabilities. In 2023, it will be more important than ever to educate the public about the potential and limitations of artificial intelligence.
"In that discussion [about ChatGPT], it is very unfortunate that we lose sight of how much of this is dependent on the human being in the loop," says Fayyad. "It's not about automating humans out of the loop; it's about bringing people into the loop in a proper manner."
That does not mean AI will have no impact on specific areas of society, however. According to Fayyad, it is understandable why people in higher education, recruitment, and creative fields feel threatened by the technology.
Teachers are already concerned about ChatGPT, which lets students have OpenAI's chatbot write entire essays for them in seconds, a new kind of cheating that could go undetected. Fayyad predicts that tools to combat it will become available in 2023. Princeton University students have already developed an app that detects whether an essay has been written using ChatGPT.
According to Fayyad, technologies for detecting and countering incidents of this kind will grow significantly in the future.
Fayyad and Rahul Bhargava, assistant professor of art, design, and journalism at Northeastern, agree that technology must be incorporated into education, not merely banned.
"New ways of writing will be developed," Bhargava says. "As a professor, am I concerned about this? Yes, but we are already using AI with our students here in the journalism department. We are trying that stuff and we are figuring it out, and we will figure it out."
In 2023, artificial intelligence may act as a catalyst for educators to reexamine their practices.
"In addition to a two-page written essay, which a computer can generate, do you have another artifact you can use to demonstrate that you learned [something]?" Bhargava asks. "Why should I want students to regurgitate information? That is not an indication of learning."
For Bhargava, whether AI will replace human jobs in 2023 matters less than the ethical questions that need to be addressed. When tools like ChatGPT are designed by teams with limited perspectives and diversity, the result is a tool that lacks perspective. The more pressing concern is "who makes these things and what questions are they asking about what biases are baked into them."
The systems that are built reflect our culture and our practices, Bhargava asserts. "Which way do they point and who is looking at them? Bias is not embedded in them; it is reflected in them."
During training, ChatGPT uses human data labelers to minimize bias and increase accuracy, according to Dakuo Wang, associate professor of art and design and computer science at Northeastern.
It is important to remember, however, that the technology is only as good as the data it is trained on. Without accurate data, its inaccuracies and limitations become far more evident, and potentially dangerous.
In Wang's opinion, cases like ChatGPT will help the public, private industry and the research community better understand the significance of data.
"What is the source of the data, and how can it be converted into a format that can be used to fine-tune or jump-start their own version of the model?" Wang asks. "It will become increasingly important to pay attention to that part."
Even with efforts underway to reduce bias in these technologies, AI remains dangerous in settings such as law enforcement and prisons, particularly for minority populations in the U.S. What is new in 2023, Bhargava says, is that these technologies are beginning to affect the majority.
Consequently, Northeastern's experts anticipate that AI laws and regulations will begin to take shape in 2023, even if they develop more slowly than the technology itself. New York City, for example, has passed legislation restricting the use of artificial intelligence in hiring, which goes into effect at the beginning of 2023.
"In terms of regulatory issues, it may be premature to discuss them now, but we are seeing definite maturation and acceleration in areas such as privacy, unfair use of AI, and determining who is responsible if an AI algorithm develops unintended or intended bias," Fayyad says.
Source: Northeastern University