Dartmouth College Scientists Create Smart Tablecloths

The Microsoft logo at the International Cybersecurity Forum (FIC) in Lille, Jan. 28, 2020. (AFP Photo)

A team of researchers at Dartmouth College, working with Microsoft, has developed a contact-sensitive object-recognition technique called Capacitivo for creating smart tablecloths.

In their paper published Monday on the ACM digital library site, the group describes their technique and how well the prototype they built worked when tested.

Over the past decade, attempts have been made by several companies to create personal electronics for integration in smart clothes. To date, most such efforts to merge electronics with fabrics have focused on fabrics that are meant to be worn. In this new effort, the researchers have switched their focus to fabrics used to make other products, such as tablecloths and furniture coverings.

Their idea was to make such surfaces aware of what has been placed on them and then use that information to provide a service. Setting a variety of fruits on a table covered with a smart tablecloth could, for example, allow an associated device such as a smartphone or smart speaker to suggest different meals that could be prepared using that fruit.

The researchers note that prior efforts by others to make similar products were based on creating fabrics that could recognize metallic objects. With their effort, they have developed a technique that works for non-metallic objects such as food and liquids.

Their technique involves weaving a grid of electrodes into a cloth attached to a textile substrate. The integrated sensors detect changes in the capacitance of electrodes as they are affected by the presence of an object. The cloth is then attached to a deep learning system and trained to recognize objects.
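The pipeline described above (a capacitance map from the electrode grid, fed to a trained classifier) can be sketched in miniature. This is an illustrative toy, not the paper's method: it uses a simple nearest-centroid classifier in place of the deep learning system, and all grids, labels, and values are made up for the example.

```python
# Toy sketch of Capacitivo-style recognition: a fabric electrode grid
# yields a 2-D capacitance map, and a classifier trained on example
# maps names the object placed on the cloth. Illustrative only; the
# actual system uses a deep learning model, not nearest centroids.

def flatten(grid):
    """Flatten a 2-D capacitance map (list of rows) into a feature vector."""
    return [v for row in grid for v in row]

def train_centroids(examples):
    """Average the capacitance maps for each label into one centroid."""
    sums, counts = {}, {}
    for label, grid in examples:
        vec = flatten(grid)
        if label not in sums:
            sums[label] = [0.0] * len(vec)
            counts[label] = 0
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(centroids, grid):
    """Return the label whose centroid is nearest to this capacitance map."""
    vec = flatten(grid)
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, vec))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Made-up 3x3 capacitance maps: an apple loads the center cells,
# a glass of water loads a corner.
training = [
    ("apple", [[0, 1, 0], [1, 5, 1], [0, 1, 0]]),
    ("water", [[4, 1, 0], [1, 0, 0], [0, 0, 0]]),
]
model = train_centroids(training)
print(classify(model, [[0, 1, 0], [1, 4, 1], [0, 1, 0]]))  # prints: apple
```

In the real system, each object's dielectric properties shift the measured capacitance at the electrodes it covers, so different foods and liquids leave distinguishable patterns for the model to learn.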

The researchers tested their idea by creating a 12-by-12-inch tablecloth prototype that they attached to a laptop running the deep learning system. As pieces of fruit were placed on the prototype, the system would analyze how they affected the tablecloth and display the name of the fruit on the screen. After multiple tests, the researchers found the system to be 94.5 percent accurate. They suggest that such a system could be used for a wide variety of purposes, including reminding users of objects they have left behind on a table and assisting with meal planning.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, operations that the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, with X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.