Stock image and media content provider Getty Images announced via a release on its website on Tuesday that it is suing Stability AI, the developer of the text-to-image deep learning model Stable Diffusion. Stable Diffusion makes it possible to create art and other kinds of visual media from text-based prompts. Getty Images noted that while it licenses content to tech firms that respect intellectual property rights, Stability AI does not fall into that category. "Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long‑standing legal protections in pursuit of their stand‑alone commercial interests," the image company said. The legal proceedings were commenced in the High Court of Justice in London.
It all started with an obscure article in an obscure journal, published just as the last AI winter was beginning to thaw. In 2004, Andreas Matthias wrote an article with the enigmatic title, "The responsibility gap: Ascribing responsibility for the actions of learning automata." In it, he highlighted a new problem with modern AI systems based on machine learning principles. Once, it made sense to hold the manufacturer or operator of a machine responsible if the machine caused harm, but with the advent of machines that could learn from their interactions with the world, this practice made less sense. Learning automata (to use Matthias' terminology) could do things that were neither predictable nor reasonably foreseeable by their human overseers.
GDL is a subfield of deep learning (Goodfellow et al., 2016) focused on the generation of new data. Following the definition provided by Foster (2019), a generative model describes how a dataset is generated (in terms of a probabilistic model); by sampling from this model, we are able to generate new data. Nowadays, machine-generated artworks have entered the market (Vernier et al., 2020), they are fully accessible online, and they attract major investments. Ethical debates have, fortunately, found a place in the conversation (for a summary of machine learning research related to fairness, see Chouldechova and Roth (2020)) because of the biases and discrimination such systems may cause (as happened with AI Portrait Ars (O'Leary, 2019)), leading to some remarkable attempts to overcome them, as in Xu et al. (2018) or Yu et al. (2020). In this context, it is possible to identify at least three problems: the use of protected works, which have to be stored in memory until the end of the training process (if not longer, in order to verify and reproduce the experiment); the use of protected works as a training set, processed by deep learning techniques through the extraction of information and the creation of a model upon them; and the ownership of intellectual property (IP) rights (if a rightsholder exists) over the generated works.
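To make Foster's definition concrete, here is a minimal, hypothetical sketch (not taken from any cited work) of a generative model in its simplest form: the model is a probability distribution fitted to a dataset, and "generation" is just sampling from that fitted distribution.

```python
import random
import statistics

# Hypothetical "dataset": noisy measurements clustered around an unknown mean.
dataset = [4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 4.7, 5.4]

# "Training": estimate the parameters of a Gaussian model describing the data.
mu = statistics.mean(dataset)
sigma = statistics.stdev(dataset)

# "Generation": sample from the fitted model to create new data points that
# resemble, but do not copy, the original dataset.
random.seed(0)
generated = [random.gauss(mu, sigma) for _ in range(3)]
print(generated)
```

A GDL model such as a GAN or a diffusion model plays the same role as the Gaussian here, only with millions of learned parameters describing a far richer distribution (e.g., over images).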
Although these arguments have already been extensively studied (e.g., Sobel (2017) examines use as a training set, and Deltorn and Macrez (2018) discuss authorship), this paper aims to analyze all the problems jointly, creating a general overview useful for both sides of the argument (developers and policymakers); it focuses only on GDL, which (as we will see) has its own peculiarities, rather than on artificial intelligence (AI) in general (which contains too many different subfields to be generalized as a whole); and it is written by GDL researchers, which may help provide a new and practical perspective on the topic.
Cybersecurity is a critical issue in today's digital age, as cybercriminals continue to find new ways to infiltrate systems and steal sensitive information. As the threat of cybercrime grows, it is becoming increasingly clear that traditional cybersecurity methods are no longer enough. But there is hope on the horizon: artificial intelligence (AI) is revolutionizing the way we think about cybersecurity and defend against cybercrime. One of the biggest benefits of AI in cybersecurity is its ability to detect and respond to threats in real time. Traditional cybersecurity methods rely on pre-defined rules and signatures to identify and block malicious activity, but as cybercriminals evolve and find new ways to evade detection, that approach is no longer sufficient.
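The contrast described above can be sketched in a few lines. This is a toy illustration, not a real security product: the signature list, traffic samples, and threshold are all invented. It shows why a rule list misses a payload it has never seen, while even a crude learned baseline can flag it as abnormal.

```python
# Hypothetical signature list a rule-based scanner might use.
SIGNATURES = {"DROP TABLE", "../etc/passwd", "<script>"}

def signature_match(event: str) -> bool:
    """Rule-based detection: flags only payloads already on the list."""
    return any(sig in event for sig in SIGNATURES)

def anomaly_score(event: str, baseline_lengths: list[int]) -> float:
    """Toy anomaly detection: deviation of event size from normal traffic."""
    mean = sum(baseline_lengths) / len(baseline_lengths)
    return abs(len(event) - mean) / (mean or 1)

# Baseline built from "normal" requests; a novel attack evades every signature.
normal_traffic = [len(e) for e in ("GET /index", "GET /about", "GET /news")]
novel_attack = "GET /" + "A" * 500

print(signature_match(novel_attack))                      # False: rules miss it
print(anomaly_score(novel_attack, normal_traffic) > 1.0)  # True: flagged as abnormal
```

Real AI-driven systems replace the length heuristic with models trained on vast telemetry, but the principle is the same: score deviation from learned normal behavior instead of matching a fixed list.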
The image above was created via Stable Diffusion with the prompt "lawyers in suits fighting robots with lasers in a futuristic, superhero style." Looks like Matthew Butterick and the Joseph Saveri Law Firm are going to have a busy year! The same folks who filed the class action against GitHub and Microsoft related to Copilot and Codex a couple of months ago have filed another one against Stability AI, DeviantArt, and Midjourney related to Stable Diffusion. The crux of the complaint concerns Stability AI and its Stable Diffusion product, but Midjourney and DeviantArt enter the picture because they have generative AI products that incorporate Stable Diffusion. DeviantArt also has some claims lobbed directly at it via a subclass because it allowed the nonprofit Large-Scale Artificial Intelligence Open Network (LAION) to incorporate the artwork submitted to its service into a large public dataset of 400 million images and captions.
I have a gray, dual position regarding generative art and, well, generative creativity in general. One view is extremely cynical; the other is hopeful. I wrote about this topic earlier here (note: a bit gloomy). Let me start with the cynical view, hyperbolized for ease of communication. I see this as a big-tech effort to lower tech wages, weaken the negotiating position of creative workers, push the commoditization of art, create a new scalable consumer market, and, more holistically, drive society toward transhumanism.
Getty Images claims that Stability AI, the creator of Stable Diffusion, used images obtained from Getty Images to train its algorithms without obtaining proper licensing. The claim covers photographs, art, and other images that the artists allege have been infringed. First, let's discuss how the artificial intelligence models mentioned above work. In general, deep neural networks and machine learning models are trained in a way that bears some similarity to how humans learn. Programmers do not instruct the algorithm to do specifically what it does; in this case, they do not direct the algorithms to copy specific elements from original pictures when constructing a new image.
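A minimal sketch can make this training process concrete. The example below is not Stable Diffusion (whose training is vastly more complex); it is a hypothetical toy model fitted by gradient descent, showing that what training produces is a small set of learned parameters capturing a pattern, not stored copies of the training examples.

```python
# Hypothetical training data: pairs following the pattern y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # the model's only "memory": two parameters
lr = 0.01         # learning rate

# Training loop: repeatedly nudge the parameters to reduce prediction error.
# No training example is ever copied into the model, only the pattern.
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # parameters approach the pattern (w ≈ 2, b ≈ 1)
```

The same principle scales up: a trained image model holds billions of weights encoding statistical regularities of its training set, which is what makes the legal question of whether training constitutes "copying" genuinely contested.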
The world of video analytics has come a long way in the past few years. What started as a complementary security surveillance technology has evolved into a critical decision-making solution for stakeholders beyond law enforcement and public safety. Powered by AI and deep learning, today's sophisticated video analytics have far-reaching and impactful applications, from accelerating investigations for criminal or commercial claims to increasing operational productivity across industries and end users, delivering cost efficiency, enhanced safety, and elevated experiences. These applications only continue to gain strength, and in this article, I'll walk you through some examples of diverse industries innovatively supporting operational and business decision making with the power of data-driven intelligence derived from video analytics. But first, a quick word on how it works: video intelligence software detects and extracts objects in video, identifies each object using trained deep neural networks, and classifies each object to enable intelligent video analysis through search and filtering, alerting, data aggregation, and visualisation capabilities.
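The pipeline just described (detect, classify, then search/filter, alert, and aggregate) can be sketched as follows. This is an illustrative outline only: the detections are hard-coded stand-ins for the output of the trained neural networks, and all names and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    frame: int         # video frame the object was extracted from
    label: str         # class assigned by the (assumed) neural network
    confidence: float  # classifier confidence in [0, 1]

def analyze(detections: list[DetectedObject], watch_label: str, min_conf: float):
    """Downstream stages: filter by class, raise alerts, aggregate per frame."""
    hits = [d for d in detections
            if d.label == watch_label and d.confidence >= min_conf]
    alert = len(hits) > 0                      # alerting stage
    counts: dict[int, int] = {}                # aggregation stage
    for d in hits:
        counts[d.frame] = counts.get(d.frame, 0) + 1
    return hits, alert, counts

# Stand-in detections, as a real detector/classifier might emit them.
stream = [
    DetectedObject(1, "person", 0.91),
    DetectedObject(1, "vehicle", 0.88),
    DetectedObject(2, "person", 0.42),   # low confidence, filtered out
]
hits, alert, counts = analyze(stream, "person", 0.8)
print(alert, counts)  # True {1: 1}
```

In production, the detection and classification stages run deep neural networks over every frame; the downstream logic shown here is what turns raw detections into searchable, alert-driving business intelligence.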
Whether computers can actually "think" and "feel" is a question that has long fascinated society. Alan M. Turing introduced a test for gauging machine intelligence as early as 1950. Movies such as 2001: A Space Odyssey and Star Wars have only served to fuel these thoughts, but while the concept was once confined to science fiction, it is rapidly emerging as a serious topic of discussion. In a few cases, the dialog has become so convincing that people have deemed machines sentient. A recent example involves former Google data scientist Blake Lemoine, who published human-to-machine discussions with an AI system called LaMDA.