In this post, we look at how to automate the detection of anomalies in a manufactured product using Amazon Lookout for Vision. With Amazon Lookout for Vision, you can notify operators in real time when defects are detected, provide dashboards for monitoring the workload, and give business users visual insights into the process. Amazon Lookout for Vision is a machine learning (ML) service that spots defects and anomalies in images using computer vision (CV). It helps manufacturing companies increase quality and reduce operational costs by quickly identifying differences in images of objects at scale.

Detecting defects and anomalies during manufacturing is a vital step in ensuring product quality. Timely detection of faults, followed by appropriate action, is important to reduce operational and quality-related costs. According to Aberdeen's research, "Many organizations will have true quality-related costs as high as 15 to 20 percent of sales revenue, in extreme cases some going as high as 40 percent." Manual inspection, whether in-line or end-of-line, is a time-consuming and expensive task.
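As a minimal sketch of what this looks like in practice, the snippet below calls the Lookout for Vision `DetectAnomalies` API via boto3 and routes a unit for manual inspection when the model flags it with sufficient confidence. The project name, model version, and confidence cutoff are illustrative placeholders, not values from this post; substitute your own trained model's identifiers.

```python
def flag_anomaly(detect_response, min_confidence=0.8):
    """Decide whether to route a unit for manual inspection, given a
    DetectAnomalies response (shape per the Lookout for Vision API)."""
    result = detect_response["DetectAnomalyResult"]
    return bool(result["IsAnomalous"]) and result["Confidence"] >= min_confidence

def inspect_image(image_path, project="factory-qc", model_version="1"):
    # "factory-qc" and model version "1" are hypothetical placeholders.
    import boto3  # imported lazily so flag_anomaly stays dependency-free
    client = boto3.client("lookoutvision")
    with open(image_path, "rb") as f:
        response = client.detect_anomalies(
            ProjectName=project,
            ModelVersion=model_version,
            Body=f.read(),
            ContentType="image/jpeg",
        )
    return flag_anomaly(response)
```

In a production line, `inspect_image` would run on each camera frame, and the confidence cutoff would be tuned against the cost of false rejects versus escaped defects.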
Companies across the globe are exposed to a variety of risks. While some can be identified and avoided through strategic planning, others cannot even be tracked. One such danger is a product recall, which typically occurs after a product or service has been released, adding huge costs for the company and, in many cases, irreversible damage. Fujitsu, a Japanese firm, recently developed an AI system capable of highlighting irregularities in a product's appearance, detecting associated issues at an earlier stage and providing the chance to correct them before the product reaches the market. The AI technology will be used for image inspection, allowing extremely detailed identification of a wide range of external abnormalities on manufactured objects, such as scratches and production errors.
American households are increasingly connected internally through the use of artificially intelligent appliances.[1] But who regulates the safety of those dishwashers, microwaves, refrigerators, and vacuums powered by artificial intelligence (AI)? On March 2, 2021, at a virtual forum attended by stakeholders across the industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the final say on regulating the safety of AI and machine learning consumer products. The CPSC is an independent agency composed of five commissioners who are nominated by the president and confirmed by the Senate to serve staggered seven-year terms. With the Biden administration's shift away from the deregulation agenda of the prior administration, and three potential opportunities to staff the commission, consumer product manufacturers, distributors, and retailers should expect increased scrutiny and enforcement.[2]
Between discarded packaging, shipping fees, inventory shortages, and damaged merchandise, returns cost retailers a fortune. In 2017, retail returns cost roughly $350 billion, and they were projected to rise to $550 billion in 2020. While some returns are inevitable (for reasons ranging from defective products to late deliveries), others are entirely avoidable. In fact, 46% of shoppers surveyed in Narvar's 2019 State of Online Returns report said their No. 1 reason for returning products was incorrect size, fit, or color. Only 3% intentionally bought multiple items knowing they'd return some.
A company's reputation and bottom line can be adversely affected if defective products are released. If a defect is not detected and the flawed product is not removed early in the production process, the damage can be costly, and the higher the unit value, the higher those costs will be. Worst of all, dissatisfied customers can demand returns. To mitigate these costs, many manufacturers install cameras to monitor their products as they move along production lines. However, the data obtained is not always useful; or, more precisely, the data is useful, but existing machine vision systems may not be able to assess it accurately at full production speeds.
Confidence in the regulatory environment is crucial to enable responsible AI innovation and foster the social acceptance of these powerful new technologies. One notable source of uncertainty, however, is that the existing legal liability system cannot assign responsibility where potentially harmful conduct and/or the harm itself is unforeseeable, and some instantiations of AI and/or the harms they may trigger are not foreseeable in the legal sense. The unpredictability of how courts would handle such cases makes the risks involved in investing in and using AI difficult to calculate with confidence, creating an environment that is not conducive to innovation and may deprive society of some of the benefits AI could provide. To tackle this problem, we propose drawing insights from financial regulatory best practices and establishing a system of AI guarantee schemes. We envisage the system forming part of broader market-structuring regulatory frameworks, with the primary function of providing a readily available, clear, and transparent funding mechanism to compensate claims that are either extremely hard or impossible to realize via conventional litigation. We propose that it be at least partially industry-funded, with funding arrangements depending on whether it would pursue other policy goals aimed more broadly at steering the trajectory of AI innovation to increase economic and social welfare worldwide. Because of the global relevance of the issue, rather than focusing on any particular legal system, we trace relevant developments across multiple jurisdictions and engage in a high-level, comparative conceptual debate about the suitability of the foreseeability concept for limiting legal liability.
The paper also refrains from confronting the intricacies of the case law of specific jurisdictions for now and, recognizing the importance of this task, leaves it to further research in support of the legal system's incremental adaptation to the novel challenges of present and future AI technologies. This article appears in the special track on AI and Society.
Anomaly detection consists of identifying, within a dataset, those samples that differ significantly from the majority of the data, which represents the normal class. It has many practical applications, ranging from defective product detection in industrial systems to medical imaging. This paper focuses on image anomaly detection using a deep neural network with multiple pyramid levels to analyze image features at different scales. We propose a network based on an encoder-decoder scheme, using standard convolutional autoencoders trained on normal data only in order to build a model of normality. Anomalies are detected by the inability of the network to reconstruct its input. Experimental results show good accuracy on MNIST, FMNIST, and the recent MVTec Anomaly Detection dataset.
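The core detection rule (train a reconstruction model on normal data only, then flag inputs it cannot reconstruct) can be sketched independently of the paper's specific architecture. The toy example below substitutes a linear autoencoder (PCA) for the convolutional network and uses synthetic low-dimensional data; the data, bottleneck size, and percentile threshold are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "normal" inputs: 16-dim samples lying near a
# 2-dimensional subspace spanned by the rows of W, plus small noise.
W = rng.normal(size=(2, 16))
normal = rng.normal(size=(500, 2)) @ W + 0.01 * rng.normal(size=(500, 16))

# "Train" a linear autoencoder on normal data only: PCA gives the
# optimal linear encoder/decoder pair for a bottleneck of size 2.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    """Per-sample squared reconstruction error through the bottleneck."""
    x = np.atleast_2d(x)
    code = (x - mean) @ components.T     # encode
    recon = code @ components + mean     # decode
    return ((x - recon) ** 2).sum(axis=1)

# Calibrate the decision threshold on normal data, here the 99th
# percentile of training reconstruction errors.
threshold = np.percentile(reconstruction_error(normal), 99)

def is_anomalous(x):
    # Samples the model cannot reconstruct well are flagged as anomalies.
    return reconstruction_error(x) > threshold
```

A deep convolutional autoencoder replaces the linear encode/decode steps with learned nonlinear ones, but the decision logic (reconstruction error against a threshold calibrated on normal data) stays the same.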
How does law regulate Artificial Intelligence (AI)? How do we ensure AI applications comply with existing legal rules and principles? Is new regulation needed, and if so, what type? These questions have gained importance as AI deployment has increased across various sectors of our societies. Adopting new technological solutions has raised legislators' concern for the protection of fundamental rights, both nationally in Finland and at the EU level. However, finding answers is not easy, and the answers we find may be frustrating: they vary from the typical "it depends" to the self-evident "it's complicated," followed by the slightly more optimistic "we don't know yet".
Justpoint, a New York-based startup that uses artificial intelligence to analyze individual medical malpractice claims, has secured $1 million in a seed funding round. Justpoint was founded by Victor Bornstein. It is an AI-first medical malpractice company offering consumers and law firms a way of understanding the legal merits of a claim as well as an instant prediction of the likely settlement amount. Harry Langenberg of Optima Tax Relief said, "Justpoint has identified a big, inefficient market in medical claims and malpractice that is ripe for disruption. Leveraging their deep experience in healthcare and technology, they have put together a brilliant team of engineers and scientists to turn their vision into reality. Their ability to leverage technologies such as AI, machine learning, and predictive analytics will add tremendous efficiencies and cut wasteful processes across the value chain, improving payouts and transparency for consumers and reducing search times and costs for law firms."
Would you entrust a personal-injury claim, divorce settlement or high-stakes contract to an algorithm? A growing number of apps and digital services are betting you will, attracting millions of Silicon Valley investment dollars but raising questions about the limits and ethics of technology in the legal sphere. Among the leaders in the emergent robo-lawyering field is DoNotPay, an app dreamed up by Joshua Browder in 2015, when he was a 17-year-old Stanford University student, to help friends dispute parking tickets. The app, which relies on an artificial intelligence-enabled chatbot, became popular, and has expanded its focus to other consumer legal services. In June it hit the million-case mark, helping save people upward of $30 million since it started, Mr. Browder says. It raised a new $12 million round of funding the same month.