This paper builds a Near-Field Communication (NFC) based localization system that allows ordinary surfaces to locate surrounding objects with high accuracy in the near-field. While there is rich prior work on device-free localization using far-field wireless technologies, the near-field is less explored. Prior work in this space operates at extremely small ranges (a few centimeters), leading to designs that sense close proximity rather than location. We propose TextileSense, a near-field beamforming system that can track everyday objects made of conductive materials (for example, a human hand) even if they are a few tens of centimeters away. We use multiple flexible NFC coil antennas embedded in ordinary and irregularly shaped surfaces we interact with in smart environments--furniture, carpets, and so forth. We design and fabricate specialized textile coils woven into the fabric of the furniture and easily hidden by acrylic paint. We then develop a near-field blind beamforming algorithm to efficiently detect surrounding objects, and use a data-driven approach to further infer their location. A detailed experimental evaluation of TextileSense shows an average accuracy of 3.5 cm in tracking the location of objects of interest within a few tens of centimeters of the furniture.

This paper seeks to build a Near-Field Communication (NFC) MIMO beamforming system that can accurately localize objects, with or without NFC capability, in the near-field. While there has been rich prior work on device-based and device-free localization in the far-field--for instance, using technologies such as Bluetooth,1 mm-wave,5 ultrasound,3 RFID,15 and visible light16--much less exploration exists in the near-field.
However, near-field technologies have significant advantages that are worth exploring: (1) their shorter range raises fewer privacy concerns than their far-field counterparts; and (2) technologies such as NFC are ubiquitous in our smartphones as well as in battery-free everyday objects (for example, credit cards, ID cards, and so on).
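The "blind" beamforming idea above can be sketched in a few lines: with no channel knowledge, sweep candidate per-coil phase weights and keep the combination that maximizes received power. This is a minimal toy under invented assumptions (4 coils, a coarse 8-point phase grid, a simulated narrowband tag signal), not TextileSense's actual algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_coils = 4

# Simulated complex baseband samples at each coil: one common tag signal
# arriving with an unknown per-coil phase, plus additive noise.
true_phases = rng.uniform(0, 2 * np.pi, n_coils)
signal = np.exp(1j * 2 * np.pi * 0.01 * np.arange(256))
x = np.exp(1j * true_phases)[:, None] * signal + \
    0.1 * (rng.standard_normal((n_coils, 256)) +
           1j * rng.standard_normal((n_coils, 256)))

def combined_power(weights, samples):
    """Average power after coherently combining coil samples."""
    y = weights.conj() @ samples
    return float(np.mean(np.abs(y) ** 2))

# Blind search: exhaustively sweep a coarse per-coil phase grid and keep
# the weight vector that maximizes combined power.
grid = np.linspace(0, 2 * np.pi, 8, endpoint=False)
best_w, best_p = None, -1.0
for idx in np.ndindex(*([len(grid)] * n_coils)):
    w = np.exp(1j * grid[list(idx)])
    p = combined_power(w, x)
    if p > best_p:
        best_w, best_p = w, p

# Equal-phase (naive) combining is one of the candidates, so the blind
# search can only match or beat it.
naive_p = combined_power(np.ones(n_coils, dtype=complex), x)
print(best_p >= naive_p)
```

The exhaustive grid sweep is exponential in the number of coils; it is shown here only to make the selection criterion (maximize combined power) concrete.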
Within the next five years, the way we work, live, play, and learn will be changed by digital humans (chatbots and avatars with very realistic human faces). Digital humans are already gaining popularity as social media influencers, and they will soon evolve into digital sales assistants, fashion advisers, and personal shoppers able to model how customers will look and move in the latest ensembles. Digital humans will become central to the multibillion-dollar fashion industry as social media is further integrated into the retail customer experience. Digital humans will also help in healthcare, enabling medical students and social workers to develop better interview skills for patients in sensitive clinical settings. They will allow people, especially those with mental health challenges, to rehearse for job interviews. They will help keep elderly people connected to their communities and respectfully monitored so they can remain in their homes longer. They will provide a human face for personalized advice, support, and training--and do it at scale.

This has become possible with the advent of cost-effective, highly realistic, personalized interactive digital agents and avatars sporting high-fidelity facial simulations, powered by advances in both real-time neural rendering (NR) and low-latency computing. NR refers to the use of machine-learning (ML) techniques to generate digital faces or face replacements in video.17 NR rose to prominence with the advent of so-called "deep fakes"--the replacement of someone's face in videos with an NR-generated face of remarkable realism. The term originates from the name of a Reddit user (/u/deepfakes), an ML engineer who posted the original deep-fake autoencoder. Often used for satire, deep fakes can be harmful, presenting novel ethical issues.
The best-known examples involve deep fakes of celebrities, a form of face "hijacking" whereby publicly available videos of a person are used to train an ML program that overlays the source person's face onto existing video footage; this technique was originally used in pornographic material.
Automated reasoning refers to a set of tools and techniques for automatically proving or disproving formulas in mathematical logic.35 It has many applications in computer science--for example, questions about the existence of bugs or security vulnerabilities in hardware or software systems can often be phrased as logical formulas, or verification conditions, whose validity can then be proved or disproved using automated reasoning techniques, a process known as formal verification.15,26 When successful, formal verification can guarantee freedom from certain kinds of design errors, an outcome that is otherwise extremely difficult to achieve. Driven by such potential benefits, the past couple of decades have seen a dramatic improvement in the performance and capabilities of automated reasoning tools, with a corresponding explosion of use cases, including formal verification, automated test-case generation, program analysis, program synthesis, and many more.5,37,38 These applications rely crucially on automated reasoning tools producing correct results.
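The core task described above--automatically proving or disproving a logical formula--can be illustrated at toy scale with a propositional truth-table checker. This is a didactic sketch, not one of the production solvers the article discusses: real tools handle far richer logics with vastly better algorithms than exhaustive enumeration.

```python
from itertools import product

def is_valid(formula, variables):
    """Prove or disprove a propositional formula by truth-table enumeration.

    Returns (True, None) if the formula holds under every assignment
    (i.e., it is valid), or (False, countermodel) with a falsifying
    assignment that disproves it.
    """
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False, env          # a countermodel disproves validity
    return True, None

# A simple "verification condition": modus ponens, (p and (p -> q)) -> q,
# encoded with 'a -> b' written as '(not a) or b'.
vc = lambda e: not (e["p"] and ((not e["p"]) or e["q"])) or e["q"]
print(is_valid(vc, ["p", "q"]))        # (True, None): valid

# A non-theorem: p -> q alone is falsified by p=True, q=False.
print(is_valid(lambda e: (not e["p"]) or e["q"], ["p", "q"]))
```

The returned countermodel is exactly the kind of artifact a verification workflow surfaces as a concrete bug witness when a verification condition fails.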
The conversational interface is an idea that is forever on the cusp of transforming the world. The potential is undeniable: Everyone has innate, untapped conversational expertise. We could do away with the nested menus required by visual interfaces; anything the user can name is immediately at hand. We could turn natural language into a declarative scripting language and operating systems into integrated development environments (IDEs). Reality, however, has not lived up to this potential.
Rising attention to generative AI prompts the question: Are we witnessing the birth of a new innovation platform? The answer seems to be yes, though it remains to be seen how pervasive this new technology will become. To have an innovation platform, there must be a foundational technology, such as a widely adopted personal computer or smartphone operating system, or the Internet and cloud-computing services with application programming interfaces (APIs) (see "The Cloud as an Innovation Platform for Software Development," Communications, October 2019). Third parties are then needed to access these APIs and start creating complementary products and services. More applications attract more users, which leads to more applications and then more users, and usually improvements in the foundational technology.
For more than a decade, computational scientist Juan R. Perilla of the University of Delaware has been working to digitally reconstruct a very particular structure of the human immunodeficiency virus (HIV). Perilla and his colleagues set out to create an active three-dimensional digital model of the virus shell, or capsid, that researchers could study and probe as if they were working with an actual particle. The processing power required to build the simulation was significant, according to Perilla, because the model needed to track how a change in one area would impact the interactions of all two million atoms in the particle. Perilla and his group succeeded in constructing the model and demonstrating various means of testing the simulation to ensure it behaves as it would in the real world. "You can actually interrogate the simulated particle, pushing and pulling on the capsid as if you were testing the actual physical system," Perilla says.
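The scaling problem Perilla describes--a change in one area affecting the interactions of every atom--comes from all-pairs force computation. The toy below is a generic NumPy Lennard-Jones sketch with fabricated parameters and only 50 particles, purely to show the O(N^2) structure; it is not Perilla's model, which runs on specialized molecular-dynamics engines at roughly two million atoms.

```python
import numpy as np

def pairwise_forces(pos, epsilon=1.0, sigma=1.0):
    """Lennard-Jones force on each atom from all others: O(N^2) pairs."""
    diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 3) displacements
    r2 = np.sum(diff ** 2, axis=-1)                   # squared distances
    np.fill_diagonal(r2, np.inf)                      # exclude self-interaction
    inv6 = (sigma ** 2 / r2) ** 3                     # (sigma/r)^6
    # Force magnitude / r, so each pair contributes along its displacement.
    mag = 24 * epsilon * (2 * inv6 ** 2 - inv6) / r2  # (N, N)
    return np.sum(mag[:, :, None] * diff, axis=1)     # (N, 3) net forces

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 5.0, (50, 3))                  # 50 toy "atoms"
f = pairwise_forces(pos)

# Sanity check in the spirit of "interrogating" the model: internal
# forces obey Newton's third law, so they sum to (numerically) zero.
print(np.allclose(f.sum(axis=0), 0.0, atol=1e-8 * np.abs(f).max()))
```

Perturbing one atom's position changes one row and one column of the pair matrix, so every atom's net force can shift--which is why a two-million-atom capsid model needs so much processing power.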
Gary Marcus: Two Models of AI Oversight -- and How Things Could Go Deeply Wrong (https://bit.ly/3Qnxd9A), June 12, 2023. Originally published on The Road to AI We Can Trust (http://bit.ly/3juuD3j). The Senate hearing that I participated in a few weeks ago (https://bit.ly/44QxHt1) left me thrilled by what I saw of the Senate that day: genuine interest and genuine humility. Senators acknowledged that they had been too slow to figure out what to do about social media, that mistakes were made then, and that there was now a sense of urgency.