- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > California > Yolo County > Davis (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (1.00)
- Information Technology > Game Theory (0.94)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.68)
I Teach Computer Science, and That Is Not All
"I teach computer science, and that is all," wrote Boaz Barak, of Harvard University, in a recent op-ed in The New York Times.a The main point of the op-ed was to protest the growing politicization of U.S. higher education, especially at elite universities, where many faculty members have moved from scholarship to advocacy. But in spite of the provocative title, the content of Barak's op-ed is considerably more nuanced. "We should not normalize bringing one's ideology to the classroom," wrote Barak, and I could not agree more. But he also wrote, "The interaction of computer science and policy sometimes arises in my classes, and I make sure to present multiple perspectives." Here, Barak is advocating fairness and balance rather than neutrality and avoidance of non-technical topics.
Elon Musk drags OpenAI into federal court
Elon Musk has filed another lawsuit against OpenAI and the company's CEO Sam Altman, two months after withdrawing a previous one. Musk once again alleges that OpenAI breached its founding commitments by putting commercial concerns ahead of the public good. This time around, though, the suit has been filed in federal court rather than in a state court. That's because the new filing alleges that OpenAI violated federal racketeering laws by conspiring to defraud Musk, according to his lawyer, Marc Toberoff. "The previous suit lacked teeth -- and I don't believe in the tooth fairy," Toberoff told The New York Times. "This is a much more forceful lawsuit."
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
The End Is Not Clear
In his January 2023 Communications Viewpoint, "The End of Programming," Matt Welsh wrote that "nobody actually understands how large AI models work." However, even today no one person understands existing large computer systems. Indeed, no team of people understands them. Staff turnover and the other practicalities of real life mean that neither the team that originally wrote them (should it still exist) nor the team currently responsible for maintaining them fully understands large software systems, which can now exceed a billion lines of code. And yet such systems are in worldwide daily use and deliver economic benefits.
- North America > United States > Washington > King County > Seattle (0.05)
- North America > United States > Texas > Harris County > Houston (0.05)
- North America > United States > Michigan > Wayne County > Detroit (0.05)
- (3 more...)
- Information Technology > Artificial Intelligence (0.58)
- Information Technology > Software (0.52)
The Tech Investment We Should Make Now to Avoid A.I. Disaster
There's good reason to fear that A.I. systems like ChatGPT and GPT4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or become obsessed by folies à deux relationships with machine personalities that don't really exist. These risks may be the fallout of a world where businesses deploy poorly tested A.I. systems in a battle for market share, each hoping to establish a monopoly. But A.I. could instead advance the public good, not private profit, and bolster democracy instead of undermining it.
- Health & Medicine (1.00)
- Government (0.96)
AI Desperately Needs Global Oversight
Every time you post a photo, respond on social media, make a website, or possibly even send an email, your data is scraped, stored, and used to train generative AI technology that can create text, audio, video, and images with just a few words. This has real consequences: OpenAI researchers studying the labor market impact of their language models estimated that approximately 80 percent of the US workforce could have at least 10 percent of their work tasks affected by the introduction of large language models (LLMs) like ChatGPT, while around 19 percent of workers may see at least half of their tasks impacted. In other words, the data you created may be putting you out of a job. When a company builds its technology on a public resource--the internet--it's sensible to say that that technology should be available and open to all. But critics have noted that GPT-4 lacked any clear information or specifications that would enable anyone outside the organization to replicate, test, or verify any aspect of the model.
- North America > United States (0.25)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.05)
- Asia > India (0.05)
- Government (0.50)
- Energy > Power Industry > Utilities > Nuclear (0.31)
ACM, Ethics, and Corporate Behavior
Everyone in computing is promoting ethics these days. The Vatican has issued the Rome Call for AI Ethics, which has been endorsed by many organizations, including tech companies. Facebook (now Meta) has donated millions of U.S. dollars to establish a new Institute for Ethics in Artificial Intelligence at the Technical University of Munich, since "ensuring the responsible and thoughtful use of AI is foundational to everything we do."a Google announced it "is committed to making progress in the responsible development of AI."b And last, but not least, ACM now requires that nominators and endorsers of ACM award candidates attest that "To the best of my knowledge, the candidate … has not committed any action that violates the ACM Code of Ethics and ACM's Core Values."
- Europe > Holy See (0.25)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.25)
- North America > United States > Texas > Harris County > Houston (0.05)
- North America > United States > California (0.05)
AI Ethics
This past year has seen a significant blossoming of discussions on the ethics of AI. In working groups and meetings spanning IEEE, ACM, the U.N., and the World Economic Forum, as well as a handful of governmental advisory committees, more intimate breakout sessions afford an opportunity to observe how we, as robotics and AI researchers, communicate our own relationship to ethics within a field teeming with possibilities of both benefit and harm. Unfortunately, many of these opportunities fail to produce genuine forward progress, as the discussions circle through the same familiar memes. Three common myths pervade such discussions, frequently stifling any synthesis: education is not needed; external regulation is undesirable; and technological optimism provides justifiable hope. The underlying good news is that discourse and curricular experimentation are now occurring at scales unmatched in the recent past.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Ireland > Munster > County Cork > Cork (0.04)
- Education (0.47)
- Banking & Finance (0.35)
A New Report on Ethical AI, An Older Post about AI Ethics Traps, and Some Hopes
Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. In June, the Pew Research Center released a report titled "Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade." It details responses from "[s]ome 602 technology innovators, developers, business and policy leaders, researchers and activists" to a question that the research center posed (in a collaboration with the Imagining the Internet Center at Elon University). The authors of the report are careful to note that it was "a nonscientific canvassing, based on a nonrandom sample," and that the results "represent only the opinions of the individuals who responded to the queries and are not projectable to any other population." Those important qualifications got lost in some of the media coverage of the report, however.