Einstein and Anime: Hong Kong University Tests AI Professors

Students at the Hong Kong University of Science and Technology use virtual reality headsets in class. Peter PARKS / AFP

Using virtual reality headsets, students at a Hong Kong university travel to a pavilion above the clouds to watch an AI-generated Albert Einstein explain game theory.
The students are part of a course at the Hong Kong University of Science and Technology (HKUST) that is testing the use of "AI lecturers" as the artificial intelligence revolution hits campuses around the world, AFP said.
The mass availability of tools such as ChatGPT has sparked optimism about new leaps in productivity and teaching, but also fears over cheating, plagiarism and the replacement of human instructors.
Professor Pan Hui, who leads HKUST's AI lecturer project, is not worried about being replaced by the technology and believes it can actually help ease what he described as a global shortage of teachers.
"AI teachers can bring in diversity, bring in an interesting aspect, and even immersive storytelling," Hui told AFP.
In his "Social Media for Creatives" course, AI-generated instructors teach 30 post-graduate students about immersive technologies and the impact of digital platforms.
These instructors are generated by feeding presentation slides into a program. The avatars' looks, voices and gestures can be customized, and they can be displayed on a screen or in VR headsets.
This is mixed with in-person teaching by Hui, who says the system frees human lecturers from the "more tedious" parts of their job.
For student Lerry Yang, whose PhD research focuses on the metaverse, the advantage of AI lecturers lay in the ability to tailor them to individual preferences and boost learning.
If the AI teacher "makes me feel more mentally receptive, or if it feels approachable and friendly, that erases the feeling of distance between me and the professor", she told AFP.
- 'Everybody's doing it' -
Educators around the world are grappling with the growing use of generative AI, from trying to reliably detect plagiarism to setting the boundaries for the use of such tools.
After initial hesitation, most Hong Kong universities last year allowed students to use AI, to degrees that vary from course to course.
At HKUST, Hui is testing avatars with different genders and ethnic backgrounds, including the likenesses of renowned academic figures such as Einstein and the economist John Nash.
"So far, the most popular type of lecturers are young, beautiful ladies," Hui said.
An experiment with Japanese anime characters split opinion, said Christie Pang, a PhD student working with Hui on the project.
"Those who liked it really loved it. But some students felt they couldn't trust what (the lecturer) said," she said.
There could be a future where AI teachers surpass humans in terms of trustworthiness, Hui said, though he prefers a mix of the two.
"We as university teachers will better take care of our students in, for example, their emotional intelligence, creativity and critical thinking," he said.
For now, despite the wow factor for students, the technology is far from the level where it could pose a serious threat to human teachers.
It cannot interact with students or answer questions, and, like all AI-powered content generators, it can offer false, even bizarre answers, sometimes called "hallucinations".
In a survey of more than 400 students last year, University of Hong Kong professor Cecilia Chan found that respondents preferred humans over digital avatars.
"(Students) still prefer to talk to a real person, because a real teacher would provide their own experience, feedback and empathy," said Chan, who researches the intersection of AI and education.
"Would you prefer to hear from a computer 'Well done'?"
That said, students are already using AI tools to help them learn, Chan added.
"Everybody's doing it."
At HKUST, Hui's student Yang echoed that view: "You just can't go against the advancement of this technology."



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs in posts generated for X, such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.