Technology experts predict the rate of adoption of artificial intelligence and machine learning will skyrocket in the next two years. These advanced technologies will spark unprecedented business gains, but along the way enterprise leaders will be called upon to grapple quickly with a smorgasbord of new ethical dilemmas. These range from AI algorithmic bias and data privacy issues to public safety concerns posed by autonomous machines running on AI. Because AI technology and its use cases are changing so rapidly, chief information officers and other executives will find it difficult to keep ahead of these ethical concerns without a roadmap. To guide both deep thinking and rapid decision-making about emerging AI technologies, organizations should consider developing an internal AI ethics framework.
Huawei Technologies has officially launched its artificial intelligence (AI) chip Ascend 910, which it says has a maximum power consumption of just 310W, lower than its originally planned spec of 350W. The chip is touted to have "more computing power than any other AI processor", delivering 256 teraflops at half-precision floating point (FP16) and 512 teraflops for integer-precision calculations. The Chinese tech giant also announced the commercial availability of its MindSpore AI computing framework, which it said was designed to ease the development of AI applications and improve the efficiency of such tools. Huawei said the AI framework handles only gradient and model data that has already been processed, so user privacy can be maintained. The platform also has "built-in protection technology" to keep AI models secure.
MOSCOW – In a new setback for Moscow, an unmanned spacecraft carrying Russia's first humanoid robot to be sent into orbit failed to dock automatically at the International Space Station on Saturday. "Russian cosmonauts issued a command to abort the automated approach of an uncrewed Russian Soyuz spacecraft to the International Space Station," the U.S. space agency NASA said in a statement. "The craft was unable to lock onto its target at the station," and "backed a safe distance away from the orbital complex while the Russian flight controllers assess the next steps," NASA said. Russian flight controllers had told the ISS crew it appeared the problem that prevented automated docking was in the station and not the Soyuz spacecraft, NASA added. Moscow news agencies quoted the flight control center as saying the Soyuz craft had to retreat to a "secure distance" from the ISS.
Fedor, which is travelling to the orbital outpost aboard the Soyuz MS-14 spacecraft, was created by Russia's Android Technology Company and the Advanced Research Fund on a technical assignment from Moscow's Emergencies Ministry. Its basic goals include transmitting telemetry data, determining parameters related to flight safety, including overloads, and carrying out experiments to test the robot's operational capabilities on spacewalks outside the ISS. A Soyuz-2.1a carrier rocket blasted off from the Gagarin Start launch pad of the Baikonur spaceport in Kazakhstan yesterday, delivering the Soyuz MS-14 spacecraft with Fedor into near-Earth orbit.
The UK Government has invested $28 million in several high-tech farming projects aimed at cutting pollution, minimizing waste and producing more food. The investment is part of the Government's modern Industrial Strategy, under which the UK has committed to boost R&D spending to 2.4 percent of GDP by 2027. The projects include Warwickshire-based Rootwave, which will use an $875,000 grant to kill weeds from the roots with electricity instead of chemicals, avoiding damage to crops. Tuberscan, in Lincolnshire, will use $496,000 to develop ground-penetrating radar, underground scans and artificial intelligence (AI) to monitor potato crops and identify when they are ready to harvest. The government hopes the technology will increase the usable crop by an estimated 5 to 10%, as well as reduce food waste with minimal additional costs.
With glass interior walls, exposed plumbing and a staff of young researchers dressed like Urban Outfitters models, New York University's AI Now Institute could easily be mistaken for the offices of any one of New York's innumerable tech startups. For many of those small companies (and quite a few larger ones) the objective is straightforward: leverage new advances in computing, especially artificial intelligence (AI), to disrupt industries from social networking to medical research. But for Meredith Whittaker and Kate Crawford, who co-founded AI Now together in 2017, it's that disruption itself that's under scrutiny. They are two of many experts who are working to ensure that, as corporations, entrepreneurs and governments roll out new AI applications, they do so in a way that's ethically sound. "These tools are now impacting so many parts of our everyday life, from healthcare to criminal justice to education to hiring, and it's happening simultaneously," says Crawford.
Scientists have created a robot that may be able to help the elderly perform tasks amid a shortage of nurses in the UK. Named Baxter, it has two arms and 3D-printed 'fingers', allowing it to step in when a person is struggling with things such as getting dressed. Artificial intelligence allows the robot to detect when assistance is needed and learn about the owner's difficulties over time. When it is ready for use in healthcare settings, it could help free up the time of staff so they can do other work. There are around 40,000 nurse vacancies in NHS England, a figure expected to double after Brexit.
The UK government has developed a voracious appetite for artificial intelligence (AI), based on a promise of its apparently transformative power across myriad industries. From prime minister Boris Johnson's pledge to fund a £250m AI lab for the NHS, to the Department for Education's recently launched 'AI horizon scanning group', AI is being lauded as a panacea for some of the most pressing issues society faces. Education is just one of the sectors meeting AI with open arms. As Matthew Jones at Perlego argued for this title, the opportunities being presented for AI to close educational accessibility gaps are exciting. In fact, educators, policymakers and investors are all being bombarded with messages about AI's seemingly endless benefits in the classroom.
OpenAI, the AI company that Elon Musk co-founded and later quit, has just released a more powerful version of its AI text-writing software. The company still won't release its full software, which could be used to write fake news and messages en masse, due to fears it might be misused. OpenAI says its text-writing system is so advanced it can write news stories and even fiction that pass as human. A user can feed the system text, anything from a few sentences to pages of it, and the system will then continue that same text in an uncannily well-written, contextually relevant, human style. However, after releasing its original system, GPT-2, in February, the company said the full software was too dangerous to release to the public, so a weaker version was made available.
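GPT-2 itself is a large transformer model and is not reproduced here, but the continuation loop the article describes, repeatedly predicting what comes next given the text so far, can be illustrated with a toy character-level model. Everything below (the corpus, the function names, the order-3 context) is a hypothetical sketch for illustration, not OpenAI's API:

```python
import random
from collections import defaultdict

def build_model(corpus, order=3):
    """Map each length-`order` context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context].append(corpus[i + order])
    return model

def continue_text(model, prompt, length=40, order=3, seed=0):
    """Autoregressively extend `prompt`, one character at a time,
    by sampling from the characters seen after the current context."""
    rng = random.Random(seed)
    out = prompt
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: stop early
            break
        out += rng.choice(choices)
    return out

corpus = "the robot wrote the news and the news read well and the robot wrote more"
model = build_model(corpus)
print(continue_text(model, "the robot"))
```

GPT-2 works the same way in outline, except its "model" is a neural network predicting the next subword token rather than a lookup table of characters, which is what lets it stay coherent over whole paragraphs.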