
Collaborating Authors

sterling


"Clipped," Reviewed: A Romp Back Through an N.B.A. Racism Scandal

The New Yorker

One upshot of the current glut of streaming platforms is a flood of programming to fill them: something for every attention span, something to plug every potential gap of viewer inactivity that might render a certain streaming service irrelevant while some other service pulls ahead. And so stories get told and retold. The romantic comedies begin to feel the same. The dating reality shows rely (often successfully, it must be said) on the same dramatic tricks. Another consequence of this, for better or worse, is that the stories being told are pulling from more immediate memory.


STERLING: Self-Supervised Terrain Representation Learning from Unconstrained Robot Experience

Karnan, Haresh, Yang, Elvin, Farkash, Daniel, Warnell, Garrett, Biswas, Joydeep, Stone, Peter

arXiv.org Artificial Intelligence

Terrain awareness, i.e., the ability to identify and distinguish different types of terrain, is a critical capability that robots must have to succeed at autonomous off-road navigation. Current approaches that provide robots with this awareness either rely on labeled data, which is expensive to collect; on engineered features and cost functions, which may not generalize; or on expert human demonstrations, which may not be available. Toward endowing robots with terrain awareness without these limitations, we introduce Self-supervised TErrain Representation LearnING (STERLING), a novel approach for learning terrain representations that relies solely on easy-to-collect, unconstrained (e.g., non-expert), and unlabeled robot experience. STERLING employs a novel multi-modal self-supervision objective through non-contrastive representation learning to learn relevant terrain representations for terrain-aware navigation. Through physical robot experiments in off-road environments, we evaluate STERLING features on the task of preference-aligned visual navigation and find that they perform on par with fully supervised approaches and outperform other state-of-the-art methods with respect to preference alignment. Additionally, in a large-scale experiment, a robot using STERLING features autonomously hikes a three-mile-long trail with only two manual interventions, demonstrating robustness to real-world off-road conditions.
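The abstract describes a multi-modal, non-contrastive self-supervision objective. As a rough illustration only, a loss in this family (here a VICReg-style formulation, which is an assumption on our part, not STERLING's actual objective; the weights, dimensions, and modality names are illustrative) might look like:

```python
import numpy as np

def non_contrastive_loss(z_a, z_b, lambda_var=1.0, lambda_cov=0.04):
    """Non-contrastive loss over two (batch, dim) embedding matrices,
    e.g. visual and inertial embeddings of the same terrain patches.
    VICReg-style sketch; hyperparameters are illustrative assumptions."""
    # Invariance term: the two modalities' embeddings of the same
    # terrain patch should agree.
    invariance = np.mean((z_a - z_b) ** 2)

    # Variance term: push each embedding dimension's std toward >= 1
    # so the representation cannot collapse to a constant.
    def variance(z):
        std = np.sqrt(z.var(axis=0) + 1e-4)
        return np.mean(np.maximum(0.0, 1.0 - std))

    # Covariance term: penalize off-diagonal covariance so dimensions
    # carry decorrelated information.
    def covariance(z):
        zc = z - z.mean(axis=0)
        n, d = z.shape
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d

    return (invariance
            + lambda_var * (variance(z_a) + variance(z_b))
            + lambda_cov * (covariance(z_a) + covariance(z_b)))
```

Because the loss never compares "negative" pairs, it avoids the large batches that contrastive methods typically need, which is one reason non-contrastive objectives suit unconstrained robot logs.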


C-3PO Style Humanoid Robots Thrive From Surge in AI Development

#artificialintelligence

A collateral beneficiary of the feverish pace of generative artificial intelligence development appears to be the humanoid robot. A Norwegian company called 1X Technologies, formerly Halodi Robotics, which describes itself as a manufacturer and inventor of androids, recently attracted $23.5 million in a round of funding led by the OpenAI Startup Fund -- the same OpenAI that got the AI snowball rolling with its ChatGPT generative AI bot. "1X is at the forefront of augmenting labor through the use of safe, advanced technologies in robotics," Brad Lightcap, OpenAI's COO and manager of the OpenAI Startup Fund, said in a statement. "The OpenAI Startup Fund believes in the approach and impact that 1X can have on the future of work." With the funds, 1X said it intends to accelerate the development of its bipedal android model NEO and expand manufacturing of its first commercially available wheel-based android, EVE, in Norway and North America.


No data scientist? No problem: How low-code AI platforms like Akkio can help

#artificialintelligence

There will be more than 1,000 elections in the United States in 2022 at the state level and higher. And as of June 30, 2022, six fundraising committees associated with the Democratic and Republican parties had reported raising a combined $1.3 billion. Raising and spending that money effectively is where specialist firms like Sterling Data Company enter the game. Sterling is a national Democratic political data firm focused on fundraising. Whatever your political preferences, Sterling's use of artificial intelligence is instructive for pretty much any organization looking to gain a competitive advantage.


The Strange, Unfinished Saga of Cyberpunk 2077

The New Yorker

Mike Pondsmith started playing Dungeons & Dragons in the late seventies, as an undergraduate at the University of California, Davis. The game, published just a few years before, popularized a newish form of entertainment: tabletop role-playing, in which players, typically using dice and a set of rule books, create characters who pursue open-ended quests within an established world. "The most stimulating part of the game is the fact that anything can happen," an early D&D review noted. Soon, other such games hit the market, including Traveller, a sci-fi game published in 1977, the year that "Star Wars" came out. Pondsmith, a tall Black man who grew up in multiple countries because his dad was in the Air Force, loved sci-fi, and fancied himself a bit like Lando Calrissian, the smooth-talking "Star Wars" rogue played by Billy Dee Williams.


Part human, part machine: is Apple turning us all into cyborgs?

The Guardian

At the beginning of the Covid-19 pandemic, Apple engineers embarked on a rare collaboration with Google. The goal was to build a system that could track individual interactions across an entire population, in an effort to get a head start on isolating potentially infectious carriers of a disease that, as the world was discovering, could be spread by asymptomatic patients. Delivered at breakneck pace, the resulting exposure notification tool has yet to prove its worth. The NHS Covid-19 app uses it, as do others around the world. But lockdowns make interactions rare, limiting the tool's usefulness, while in a country with uncontrolled spread, it isn't powerful enough to keep the R number low. In the Goldilocks zone, when conditions are just right, it could save lives.


UCI to host two-day conference on artificial intelligence

#artificialintelligence

EVENT: The UCI Forum for the Academy and the Public will host a two-day conference on "The Future of the Future: The Ethics and Implications of AI." Keynote speaker Bruce Sterling, an award-winning science fiction author, and an interdisciplinary and international panel of writers, academics and communicators will tackle an advance in technology that touches our everyday lives: artificial intelligence. INFORMATION: All events are free and open to the public, but please RSVP here. Visitor parking is available in the Student Center Parking Structure (grid D5 on campus map) and in the Mesa Parking Structure (grid D3 on campus map) for $13 per day or $2 per hour. Media planning to attend should contact Pat Harriman at 949-824-9055 or pharrima@uci.edu. BACKGROUND: The UCI Forum for the Academy and the Public is a collaborative project of the literary journalism program, the School of Humanities and the School of Law that bridges the university and the public via conferences and pop-ups that take on the most pressing issues of our time.


Before You Hand Human Resources Over to AI ...

#artificialintelligence

As the business world grapples with the potential of AI and machine learning, new ethical challenges related to their use arise on a regular basis. One area where these tensions play out is talent management: choosing between relying on human expertise and deferring decisions to machines in order to better understand employee needs, skills, and career potential. Companies like IBM, with a workforce of 350,000, are at the forefront of employing new technologies and techniques, including machine learning and AI, to help recruit and retain the right kinds of workers. When IBM CEO Ginni Rometty was recently interviewed at a work and talent summit, she relayed the full extent to which the company is committed to using AI to realize these goals. IBM is no stranger to the AI space, having built up considerable expertise (and market share) with its powerful Watson AI service.


Stanford professor: Don't let artificial intelligence pick your employees

#artificialintelligence

Implicit in his comment is the notion that, someday, these systems will be ready. But work by Adina Sterling, an assistant professor of organizational behavior at Stanford Graduate School of Business, questions this optimism, linking it to a deep, and deeply problematic, misconception of hiring's strategic role. In a new paper coauthored with Daniel W. Elfenbein of Washington University in St. Louis and published in Strategy Science, Sterling articulates how smart hiring is inextricable from long-term corporate strategy; she also explains why delegating the responsibility of hiring to machines, at least in the near future, is likely to undermine its strategic potential. "With technology increasingly stepping into this role, we're at a moment in which these questions of higher-level strategy ought to be of great importance," she says. The use of machines in hiring became widespread roughly a quarter-century ago, when career platforms like Monster.com emerged on the web.


Don't Let Artificial Intelligence Pick Your Employees

#artificialintelligence

In 2014, Amazon launched a new recruitment algorithm to help it find the best job candidates. A year into the experiment, the company saw that the tool was biased against women and quietly shut the program down. When Reuters broke the story last October, John Jersin, the product leader for LinkedIn Talent Solutions, offered his thoughts on the general landscape of algorithmic hiring: "I certainly would not trust any AI system today to make a hiring decision on its own," he said. "The technology is just not ready yet." Implicit in his comment is the notion that, someday, these systems will be ready.