Space projects were long dominated by government bodies, until ambitious companies such as SpaceX and Blue Origin entered the field. The two are the most prominent names in the private space community and are often pitted against each other, partly because their founders compete in other arenas as well. Led by two of the most powerful businessmen of our time, Elon Musk and Jeff Bezos, the companies have also crossed paths over their interest in autonomous vehicles. In the space segment, while the two companies may look quite similar in their attempts to explore space, their ideologies and approaches differ significantly. One thing cannot be denied: both are developing large, reusable vehicles capable of carrying people and satellites into space. Yet while SpaceX's missions and launches have made headlines over the past few years, Blue Origin has been far less visible in gaining traction.
Decision-making on numerous aspects of our daily lives is being outsourced to machine-learning (ML) algorithms and artificial intelligence (AI), motivated by speed and efficiency in the decision process. ML approaches, one of the families of algorithms underpinning artificial intelligence, are typically developed as black boxes. The implication is that ML code scripts are rarely scrutinised; interpretability is usually sacrificed in favour of usability and effectiveness. Room for improvement in practices associated with programme development has also been flagged along other dimensions, including, inter alia, fairness, accuracy, accountability, and transparency. In this contribution, the production of guidelines and dedicated documents around these themes is discussed. The following applications of AI-driven decision-making are outlined: (a) risk assessment in the criminal justice system, and (b) autonomous vehicles, highlighting points of friction across ethical principles. Possible ways forward towards the implementation of governance on AI are finally examined.
XAOS MOTORS, headquartered in Korea, is pushing forward the technological progress of autonomous driving. By launching its XCAT LiDAR sensor now, XAOS MOTORS enables OEMs to bring fully self-driving cars to market earlier than expected. The MEMS LiDAR sensor XCAT was developed for self-driving cars. With a scanning range of over 300 meters, XCAT can safely cope with high-speed driving. XCAT is designed for mass production, allowing OEMs to adopt high-performance 3D LiDARs at low cost.
For all the advances enabled by artificial intelligence, from speech recognition to self-driving cars, AI systems consume a lot of power and can generate high volumes of climate-changing carbon emissions. A study last year found that training an off-the-shelf AI language-processing system produced 1,400 pounds of emissions--about the amount produced by flying one person roundtrip between New York and San Francisco. The full suite of experiments needed to build and train that AI language system from scratch can generate even more: up to 78,000 pounds, depending on the source of power. But there are ways to make machine learning cleaner and greener, a movement that has been called "Green AI." Some algorithms are less power-hungry than others, for example, and many training sessions can be moved to remote locations that get most of their power from renewable sources.
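The spread between the 1,400-pound and 78,000-pound figures above comes down to simple arithmetic: emissions scale with both the energy a training run consumes and the carbon intensity of the grid supplying it. A minimal sketch, using illustrative (not measured) intensity values:

```python
# Back-of-the-envelope model: emissions = energy used x grid carbon intensity.
# The per-region intensity figures below are hypothetical assumptions chosen
# only to show the effect of moving training to a cleaner power source.

GRID_INTENSITY_KG_PER_KWH = {
    "coal_heavy_grid": 0.90,     # assumed, not a real regional average
    "mixed_grid": 0.40,          # assumed
    "mostly_renewable": 0.05,    # assumed
}

def training_emissions_kg(energy_kwh: float, region: str) -> float:
    """CO2-equivalent emissions for a training run powered by the given grid."""
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH[region]

energy = 1000.0  # assumed energy for one training run, in kWh
for region, intensity in GRID_INTENSITY_KG_PER_KWH.items():
    print(f"{region}: {training_emissions_kg(energy, region):.1f} kg CO2e")
```

The same run emits roughly eighteen times less carbon on the mostly-renewable grid than on the coal-heavy one, which is why relocating training sessions is one of the levers the "Green AI" movement points to.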
The following was issued as a joint release from the MIT AgeLab and Toyota Collaborative Safety Research Center. How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations? These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg. Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information. "In sharing this dataset, we hope to encourage researchers, the industry, and other innovators to develop new insight and direction into temporal AI modeling that enables the next generation of assisted driving and automotive safety technologies," says Bryan Reimer, principal researcher.
Few issues are as important to businesses today as sustainability. Because the modern consumer cares about the environment, companies need to meet higher expectations for eco-friendly practices. Supply chains, in particular, have a lot of room to improve. It's no secret that logistics chains aren't exactly eco-friendly: they account for more than 80% of carbon emissions globally. The modern business world can't exist without supply chains, but the natural world won't exist in the same way if they don't improve. The good news is there's an . . .
Edge computing can roughly be defined as the practice of processing and storing data either where it's created or close to where it's generated -- "the edge" -- whether that's a smartphone, an internet-connected machine in a factory or a car. The goal is to reduce latency, or the time it takes for an application to run or a command to execute. While that sometimes involves circumventing the cloud, it can also entail building downsized data centers closer to where users or devices are. Anything that generates a massive amount of data and needs that data to be processed as close to real time as possible can be considered a use case for edge computing: think self-driving cars, augmented reality apps and wearable devices.
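The pattern described above can be sketched in a few lines: process raw readings where they are generated and push only a compact summary upstream, rather than streaming every sample to the cloud. This is a minimal illustration; the `EdgeNode` class and its methods are hypothetical, not any real edge framework's API.

```python
# Minimal sketch of the edge-computing pattern: buffer raw sensor readings
# on-device and upload only periodic aggregates, cutting bandwidth and latency.

from statistics import mean

class EdgeNode:
    """Buffers raw readings locally and emits periodic summaries."""

    def __init__(self, window: int = 5):
        self.window = window
        self.buffer = []
        self.uploads = []  # stands in for messages sent to a cloud backend

    def ingest(self, reading: float) -> None:
        self.buffer.append(reading)
        if len(self.buffer) >= self.window:
            # Only the aggregate leaves the device, not the raw stream.
            self.uploads.append({"mean": mean(self.buffer), "max": max(self.buffer)})
            self.buffer.clear()

node = EdgeNode(window=5)
for r in [1.0, 2.0, 3.0, 4.0, 5.0]:
    node.ingest(r)
print(node.uploads)  # one summary dict instead of five raw samples
```

A real deployment would run such logic on the smartphone, factory machine, or car itself, with the cloud (or a nearby downsized data center) receiving only the summaries.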
The world came together to build 5G. Now the next-generation wireless technology is pulling the world apart. The latest version of the 5G technical specifications, expected Friday, adds features for connecting autonomous cars, intelligent factories, and internet-of-things devices to crazy-fast 5G networks. The blueprints reflect a global effort to develop the technology, with contributions from more than a dozen companies from Europe, the US, and Asia. And yet, 5G is also pulling nations apart--with the US and China anchoring the tug-of-war.
When China restricted the importation of recyclable waste products in 2018, many western companies turned to robotic technologies to strengthen their processing capabilities. "The ban exposed how vulnerable the current infrastructure for recycling is," says Chris Wirth, vice-president of marketing and business development for AMP Robotics, a Denver-based industrial recycling artificial intelligence company. To recycle in a cost-effective, comprehensive and safe way, goods must be broken down into their constituent commodities to be sold on, in a process that has been likened to "unscrambling an egg". Roboticists think that computer vision, neural networks and modular robotics can enable a more intelligent, flexible approach to recycling. AI-enabled robotics can identify items based on visual cues such as logos, colour, shape and texture, sorting them and taking them apart.
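The identify-then-sort step described above can be sketched as follows. A production system like AMP's would use a trained computer-vision model; here a rule-based `classify()` over crude visual cues (logo, colour, shape, texture) stands in as an assumption, purely to show the routing logic.

```python
# Toy sketch of AI-driven recycling sortation: map each item's visual cues
# to a material bin. classify() is a hypothetical stand-in for a vision model.

def classify(item: dict) -> str:
    """Guess a material category from crude visual cues."""
    if item.get("logo") == "recycling_triangle":
        return "plastic"
    if item.get("texture") == "shiny" and item.get("shape") == "can":
        return "aluminium"
    if item.get("colour") == "brown" and item.get("texture") == "matte":
        return "cardboard"
    return "landfill"  # unrecognised items fall through

def sort_items(items: list) -> dict:
    """Route a stream of items into bins keyed by predicted material."""
    bins = {}
    for item in items:
        bins.setdefault(classify(item), []).append(item["id"])
    return bins

stream = [
    {"id": 1, "logo": "recycling_triangle"},
    {"id": 2, "texture": "shiny", "shape": "can"},
    {"id": 3, "colour": "brown", "texture": "matte"},
    {"id": 4},
]
print(sort_items(stream))
```

Swapping the rule-based classifier for a neural network changes only `classify()`; the sorting loop, which is what the robotics adds, stays the same.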