

What not to share with ChatGPT if you use it for work

Mashable

The question is no longer "What can ChatGPT do?" It's "What should I share with it?" Internet users are generally aware of the risks of data breaches and the ways our personal information is used online. But ChatGPT's seductive capabilities seem to have created a blind spot around hazards we normally take precautions to avoid. OpenAI only recently announced a new privacy feature that lets ChatGPT users disable chat history, preventing conversations from being used to improve and refine the model. "It's a step in the right direction," said Nader Henein, a privacy research VP at Gartner who has two decades of experience in corporate cybersecurity and data protection.


Asus plans to sell first managed AI service hosted at client facilities

The Japan Times

Taiwan's Asustek Computer plans to introduce one of the first services that lets companies tap into the potential of generative artificial intelligence while keeping control over their data. The novelty of the Taipei-based firm's offering, called AFS Appliance, is that all of the hardware will be installed at the client's own facilities -- to maintain security and control. The AI computational platform, built on Nvidia Corp.'s chip technology, will be operated and updated with new data by Asustek, also known as Asus. A major concern around services like OpenAI's is that they're operated through online data centers that can expose sensitive information. Samsung Electronics Co. banned employees from using OpenAI's ChatGPT after it found workers had uploaded sensitive code to the platform.


NVIDIA's G-Sync ULMB 2 aims to minimize motion blur in games

Engadget

NVIDIA has revealed G-Sync Ultra Low Motion Blur (ULMB) 2, the second generation of tech it designed to minimize motion blur in competitive games. Compared with ULMB, which it released in 2015, the company says the latest version offers nearly twice as much brightness, along with almost no crosstalk -- the strobing or double-image effect that sometimes appears when blur reduction features are enabled. Motion clarity is largely determined by the monitor's pixel response time. To improve matters, NVIDIA is using "full refresh rate backlight strobing," which builds on the backlight strobing technique from the original ULMB. Although the previous version of the tech improved motion clarity for many, it needed to switch off the monitor's backlight 75 percent of the time.
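The brightness trade-off described above follows directly from the backlight's duty cycle: if the backlight is dark 75 percent of each refresh interval, perceived brightness drops to roughly a quarter of the panel's steady-state output, and doubling the effective duty cycle doubles perceived brightness. A minimal sketch of that relationship (the 400-nit figure and duty-cycle values are illustrative assumptions, not NVIDIA's published specifications):

```python
def effective_brightness(nits_full, duty_cycle):
    """Approximate perceived brightness of a strobed backlight.

    nits_full: steady-state (non-strobed) panel brightness in nits
    duty_cycle: fraction of each refresh interval the backlight is lit
    """
    return nits_full * duty_cycle

# Original ULMB: backlight off 75% of the time -> 25% duty cycle
ulmb1 = effective_brightness(400, 0.25)  # -> 100.0 nits
# ULMB 2 claims nearly twice the brightness, consistent with
# roughly doubling the effective duty cycle
ulmb2 = effective_brightness(400, 0.50)  # -> 200.0 nits
```

This is why "nearly twice as much brightness" is a meaningful claim for a strobing feature: the gain comes from keeping the backlight lit for a larger share of each refresh cycle, not from a brighter panel.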


NVIDIA's generative AI lets gamers converse with NPCs

Engadget

NVIDIA has unveiled technology called Avatar Cloud Engine (ACE) that would allow gamers to speak naturally to non-playable characters (NPCs) and receive appropriate responses. The company revealed the tech during its generative AI keynote at Computex 2023, showing a demo called Kairos with a playable character speaking to an NPC named Jin in a dystopian-looking ramen shop. The demo (below in 32:9, the widest widescreen I've ever seen) shows the player carrying on a conversation with Jin. "Hey Jin, how are you," the person asks. "Unfortunately, not so good," replies Jin. "How come?" "I am worried about the crime around here."


Nvidia unveils new kind of Ethernet for AI, Grace Hopper 'Superchip' in full production

ZDNet

Nvidia CEO Jensen Huang, delivering the opening keynote of the Computex computer technology conference on Monday in Taipei, Taiwan, unveiled a host of new products, including a new kind of Ethernet switch dedicated to moving high volumes of data for artificial intelligence tasks. Huang showed off the first iteration of Spectrum-X, the Spectrum-4 chip, with one hundred billion transistors in a 90-millimeter by 90-millimeter die. "How do we introduce a new Ethernet, that is backward compatible with everything, to turn every data center into a generative AI data center?" Huang asked. "For the very first time we are bringing the capabilities of high-performance computing into the Ethernet market," he said. Spectrum-X, as the family of Ethernet products is known, is "the world's first high-performance Ethernet for AI," according to Nvidia.


NVIDIA's next DGX supercomputer is all about generative AI

Engadget

NVIDIA CEO Jensen Huang made a string of announcements during his Computex keynote, including details about the company's next DGX supercomputer. Given where the industry is clearly heading, it shouldn't come as a surprise that the DGX GH200 is largely about helping companies develop generative AI models. The supercomputer uses a new NVLink Switch System to enable 256 GH200 Grace Hopper superchips to act as a single GPU (each of the chips has an Arm-based Grace CPU and an H100 Tensor Core GPU). This, according to NVIDIA, allows the DGX GH200 to deliver 1 exaflop of performance and to have 144 terabytes of shared memory. The company says that's nearly 500 times as much memory as you'd find in a single DGX A100 system.
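The "nearly 500 times" comparison checks out with back-of-the-envelope arithmetic, assuming the 320 GB GPU-memory configuration of the DGX A100 (an assumption on my part; a 640 GB variant also exists, which would give roughly half this ratio):

```python
# Back-of-the-envelope check of NVIDIA's memory comparison.
dgx_gh200_memory_tb = 144   # shared memory across 256 GH200 superchips
dgx_a100_memory_gb = 320    # assumed 320 GB DGX A100 configuration

ratio = dgx_gh200_memory_tb * 1000 / dgx_a100_memory_gb
print(ratio)  # 450.0 -- i.e. "nearly 500 times"
```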


Fukui launches Japan's first transport service using 'level 4' autonomous driving

The Japan Times

Such services are expected to become a new means of public transit in regions facing population decline. In Eiheiji, where level 4 autonomous driving was approved for the first time in the country, a seven-seater electric cart developed by the National Institute of Advanced Industrial Science and Technology and others runs on a section of a walking trail spanning about 2 kilometers. There is no operator in the cart, and one person in charge of remote monitoring manages up to three such electric carts.


'They're afraid their AIs will come for them': Doug Rushkoff on why tech billionaires are in escape mode

The Guardian

It was a tough week in tech. The top US health official warned about the risks of social media to young people; tech billionaire Elon Musk further trashed his reputation with the disastrous Twitter launch of a presidential campaign; and senior executives at OpenAI, makers of ChatGPT, called for the urgent regulation of "super intelligence". But to Doug Rushkoff – a leading digital age theorist, early cyberpunk and professor at City University of New York – the triple whammy of rough events represented some timely corrective justice for the tech barons of Silicon Valley. And more may be to come as new developments in tech come ever thicker and faster. "They're torturing themselves now, which is kind of fun to see. They're afraid that their little AIs are going to come for them. They're apocalyptic, and so existential, because they have no connection to real life and how things work. They're afraid the AIs are going to be as mean to them as they've been to us," Rushkoff told The Guardian in an interview.


'Godfather of AI' says there's a 'serious danger' tech will get smarter than humans fairly soon

FOX News

The so-called "godfather of AI" continues to warn about the dangers of artificial intelligence weeks after quitting his job at Google. In a recent interview with NPR, Geoffrey Hinton said there was a "serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control." He asserted that politicians and industry leaders need to think about what to do regarding that issue right now. These advances are no longer science fiction, Hinton cautioned, but a serious problem that is probably going to arrive very soon.


The Tricky Business of Computing Ethical Values

Slate

An expert in computing responds to Tara Isabella Burton's "I Know Thy Works." In 2018 researchers from the Massachusetts Institute of Technology Media Lab, Harvard University, the University of British Columbia, and Université Toulouse Capitole shared the results of one of the largest moral experiments conducted to date. They recorded 40 million ethical decisions from millions of people across 233 countries. The experiment's "Moral Machine" posed variations of the classic trolley problem to users, imagining the trolley instead as a self-driving car. Should the car swerve and collide with jaywalking pedestrians or maintain its current trajectory, which would yield inevitable doom for the passengers inside?