Sleep Staging Using End-to-End Deep Learning Model


Sleep staging using nocturnal sounds recorded from common mobile devices may allow daily at-home sleep tracking. The objective of this study is to introduce an end-to-end (sound-to-sleep-stages) deep learning model for sound-based sleep staging designed to work with audio from the microphone chips that are essential in mobile devices such as modern smartphones. Patients and Methods: Two different audio datasets were used: audio data routinely recorded by a solitary microphone chip during polysomnography (PSG dataset, N = 1154) and audio data recorded by a smartphone (smartphone dataset, N = 327). The audio was converted into Mel spectrograms to detect latent temporal frequency patterns of breathing and body movement amid ambient noise. The proposed neural network model learns first to extract features from each 30-second epoch and then to analyze the inter-epoch relationships of the extracted features to finally classify the epochs into sleep stages. Results: Our model achieved 70% epoch-by-epoch agreement for 4-class (wake, light, deep, REM) sleep stage classification and robust performance across various signal-to-noise conditions. Model performance was not considerably affected by sleep apnea or periodic limb movement. Conclusion: The proposed end-to-end deep learning model shows the potential of low-quality sounds recorded from microphone chips to be utilized for sleep staging. A future study using nocturnal sounds recorded from mobile devices in a home environment may further confirm the use of mobile device recordings as an at-home sleep tracker. Sound-based sleep staging is a potential candidate for non-contact home sleep trackers. However, existing works were limited to audio measured in a contact manner (ie, tracheal sounds), at a limited distance (ie, 25 cm), or by a professional microphone. A more practical and convenient approach is to utilize easily obtainable audio, such as sounds recorded by commercial mobile devices.
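The front end described above, splitting a night's audio into 30-second epochs and turning each into a time-frequency image, can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' code: a plain log-magnitude STFT stands in for their Mel spectrogram, and the sample rate, FFT size, and hop length are illustrative assumptions.

```python
import numpy as np

def epoch_spectrograms(audio, sr=16000, epoch_sec=30, n_fft=512, hop=256):
    """Split a nocturnal recording into 30-second epochs and compute a
    log-magnitude spectrogram per epoch (a stand-in for the paper's Mel
    spectrograms; sr/n_fft/hop are hypothetical parameter choices)."""
    samples_per_epoch = sr * epoch_sec
    n_epochs = len(audio) // samples_per_epoch  # drop the trailing partial epoch
    specs = []
    for i in range(n_epochs):
        epoch = audio[i * samples_per_epoch:(i + 1) * samples_per_epoch]
        # Frame the epoch, window each frame, and take the FFT magnitude
        frames = np.lib.stride_tricks.sliding_window_view(epoch, n_fft)[::hop]
        windowed = frames * np.hanning(n_fft)
        mag = np.abs(np.fft.rfft(windowed, axis=-1))
        specs.append(np.log1p(mag))
    # One spectrogram "image" per epoch, ready for a per-epoch feature extractor
    return np.stack(specs)  # shape: (n_epochs, n_frames, n_fft // 2 + 1)

# Two minutes of synthetic audio yields four 30-second epochs
audio = np.random.randn(16000 * 120).astype(np.float32)
specs = epoch_spectrograms(audio)
print(specs.shape[0])  # 4
```

In the model described above, a first network stage would consume each of these per-epoch spectrograms independently, and a second stage would then model the sequence of epoch features to assign sleep stages.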

Becoming an 'AI Powerhouse' Means Going All In


There are plenty of organizations that are dabbling with AI, but relatively few have decided to go all in on the technology. One that is decidedly on that path is Mastercard. Employing a combination of acquisitions and internal capabilities, Mastercard has the clear objective of becoming an AI powerhouse. Just what does that term mean, and how is it being applied at the company? Some refer to the idea of aggressive, pervasive adoption of AI as being "AI first." Others use the term "AI fueled" or "all in on AI" (that's Tom's favorite, since it's the title of his forthcoming book on the subject).

Why Giving "Human Rights" to AI Is a Bad Idea


In a recent Living in the Solution podcast with otolaryngologist and broadcaster Elaina George at Liberty Talk Radio, Wesley J. Smith, lawyer and host of the Humanize podcast at Discovery Institute's Center on Human Exceptionalism, tackled the question "Can You Be a Christian and Believe in Transhumanism?" (June 4, 2022). Transhumanism, or H+, as it is sometimes called, is a movement to achieve immortality through new biotechnology or a merger with artificial intelligence (AI). In the first portion of the podcast, which we covered on Sunday, June 12, they talked about the way being a human, a computer, or an animal is viewed by transhumanists as all just a choice now, thanks to new technology. In the second, they looked at the religious elements in transhumanism. In this third and final segment, they discuss the difference in values between Christianity and transhumanism. A partial transcript and notes follow.

The Future of Work


The 1st of May is celebrated as International Labor Day, as it historically marks the relentless struggle of the working class to get the workday reduced to 8 hours and the workweek to 40 hours (Al Jazeera, 2019). The history of International Labor Day is rooted in the struggle for freedom and rights. It was initially called the "day of demonstrations," as peaceful protests by workers in Chicago demanding reduced working hours were met with violence by the state. It also led to the sentencing to death of revolutionary leaders, who were tried only because of their political beliefs, without any evidence linking them to violence. Although this movement for labor rights started in the West, it soon reached other parts of the globe, where non-Western countries like India, Bangladesh, and Pakistan also initiated similar demonstrations to support better labor rights and opportunities.

Software engineer creates AI that identifies anonymous faces in WWII photos


In a story originally reported by The Times of Israel, a software engineer in New York has created and developed an AI that scans through hundreds of thousands of photos to help identify victims and survivors of the Holocaust. From Numbers to Names (N2N) is an artificially intelligent facial recognition platform that can scan through photos from prewar Europe and the Holocaust. Daniel Patt, a 40-year-old software engineer now working for Google, works on the project in his own free time with his own resources, according to the article, but is being joined by a growing team of engineers, researchers, and data scientists. According to the United States Holocaust Memorial Museum (USHMM) website, there is no single list identifying the victims and survivors of the Holocaust, and research to find individuals' stories is a long process of following leads on minimal information. The museum does, however, offer various ways onsite for the families of survivors and victims seeking information and documentation.

Fun AI Apps Are Everywhere Right Now. But a Safety 'Reckoning' Is Coming


If you've spent any time on Twitter lately, you may have seen a viral black-and-white image depicting Jar Jar Binks at the Nuremberg Trials, or a courtroom sketch of Snoop Dogg being sued by Snoopy. These surreal creations are the products of Dall-E Mini, a popular web app that creates images on demand. Type in a prompt, and it will rapidly produce a handful of cartoon images depicting whatever you've asked for. More than 200,000 people are now using Dall-E Mini every day, its creator says--a number that is only growing. A Twitter account called "Weird Dall-E Generations," created in February, has more than 890,000 followers at the time of publication.

Three opportunities of Digital Transformation: AI, IoT and Blockchain


Koomey's law This law posits that the energy efficiency of computation doubles roughly every one-and-a-half years (see Figure 1–7). In other words, the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has, consider the fact that a fully charged MacBook Air, if it ran at the 1992 energy efficiency of computation, would completely drain its battery in a mere 1.5 seconds. According to Koomey's law, the energy requirements for computation in embedded devices are shrinking to the point that harvesting the required energy from ambient sources like solar power and thermal energy should suffice to power the computation necessary in many applications. Metcalfe's law This law has nothing to do with chips, but everything to do with connectivity. Formulated by Robert Metcalfe as he invented Ethernet, the law essentially states that the value of a network grows quadratically, that is, in proportion to the square of the number of its nodes (see Figure 1–8).
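Both laws reduce to one-line formulas. A minimal sketch follows; the function names and the 15-year horizon in the example are illustrative choices, not from the text.

```python
def koomey_factor(years: float, doubling_period: float = 1.5) -> float:
    """Koomey's law: computations per joule double roughly every 1.5
    years, so efficiency grows by 2**(years / doubling_period)."""
    return 2 ** (years / doubling_period)

def metcalfe_value(nodes: int) -> int:
    """Metcalfe's law: a network's value grows with the square of its
    node count (n * (n - 1) / 2 possible links, ~ n**2 for large n)."""
    return nodes ** 2

# 15 years of Koomey's law: 2**10 = 1024x more computation per joule
print(int(koomey_factor(15)))  # 1024

# Doubling a network's node count quadruples its Metcalfe value
print(metcalfe_value(20) // metcalfe_value(10))  # 4
```

The contrast is the point: Koomey's law compounds in time (efficiency per joule), while Metcalfe's law compounds in scale (value per node added), and together they explain why ever-smaller, ever-more-connected devices keep gaining in usefulness.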

Has artificial intelligence (AI) come alive like in sci-fi movies? This Google engineer thinks so


If you have ever interacted with a chatbot, you know we're still years away from those things convincing you that you are chatting with a real human. That's no surprise, as many chatbots do not actually use machine learning to converse more naturally; instead, they only complete scripted actions based on keywords. A good chatbot that truly utilises machine learning can fool you into thinking that you're talking to a human. In fact, a program from 1965 fooled people into thinking that it was a human.
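The keyword-scripted behavior described above can be sketched as a toy rule-based bot; the keywords and replies here are entirely hypothetical, chosen only to show the mechanism (no machine learning involved).

```python
# A toy rule-based "chatbot": scripted replies triggered by keyword
# matching, as described above. Keywords and replies are made up.
RULES = {
    "refund": "I can help with refunds. Please share your order number.",
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "hello": "Hi there! How can I help you today?",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, scripted_response in RULES.items():
        if keyword in text:
            return scripted_response
    return "Sorry, I didn't understand that."  # no keyword matched

print(reply("Hello, anyone there?"))  # prints the scripted greeting
```

Anything outside the keyword table falls through to the fallback line, which is exactly why such bots feel so far from human conversation.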

Riot Games will monitor 'Valorant' voice chat to combat disruptive players


Abusive Valorant players could soon have their verbal tirades come back to haunt them. In a blog post published on Friday, Riot Games outlined a plan to begin monitoring in-game voice chat as part of a broader effort to combat disruptive behavior within its games. On July 13th, the studio will begin collecting voice data from Valorant games played in North America. According to Riot, it will use the data to get its AI model "in a good enough place for a beta launch later this year." During this initial stage, Riot says it won't use voice evaluation for disruptive behavior reports.