Sensing and Signal Processing


Pushing the Envelope in Pulmonary Image Analysis with Machine Learning and AI

#artificialintelligence

Pulmonology is a field of medicine that deals with respiratory tract diseases, and the medical imaging used by pulmonologists is predominantly chest imaging: CXR, CT, MRI, PET, V/Q scanning, ultrasound, and the like. High-quality medical image analysis is crucial in pulmonary diagnostics and treatment. While the most conventional method of assessing lung tissue and surrounding structures is computed tomography, other modalities are used for additional insights and to accommodate individual contraindications. When acquiring a medical image, radiologists have to strike a balance between image quality and the permissible degree of exposure for the patient. This is especially critical when a person has to get multiple chest scans in a short period of time.


10 Ways AI Is Improving Construction Site Security

#artificialintelligence

The better the construction site's real-time safety and security monitoring, the more flexible construction processes become. Bottom line: AI and machine learning are reducing construction site accidents, theft, vandalism, and hazardous operating conditions by analyzing 24/7 video feeds in real time, gaining new predictive insights and contextual intelligence into threats. According to the National Equipment Register, construction theft losses often exceed $1B a year. The latest-model equipment, tools, and supplies are the most frequently stolen and the least likely to be recovered. Only 25% of stolen construction equipment is recovered.


Agfa launches its SmartXR Assistant

#artificialintelligence

Agfa announced the launch of its SmartXR portfolio at RSNA, which is being held virtually. SmartXR uses a unique combination of hardware and AI-powered software to lighten radiographers' workloads and provide image acquisition support. This newest member of Agfa's DR portfolio offers key assistance during the radiology routine, which has proven to be very important during the COVID-19 crisis as well as beyond. The SmartXR portfolio brings intelligence to digital radiography (DR) equipment at the point of care. Integrated sensors and cameras, combined with powerful AI software, 3D machine vision, deep learning, and machine intelligence, support the radiographer with first-time-right image acquisition.


On image recognition software, AI, and patents - Innovation Origins

#artificialintelligence

I find them incredibly irritating. Those images you have to click on to prove that you are not a robot. If you are just one click away from a nice weekend away, you first have to figure out where you can see the traffic lights in 16 tiny fuzzy squares. Google is only too happy to make use of these puzzle-solving attempts. For one thing, the company uses artificial intelligence to train its image recognition software.


2021 Complete Computer Vision Bootcamp, Zero-Hero in Python

#artificialintelligence

This course will teach you computer vision and image processing techniques from basic to advanced level. It provides high-quality content to help you learn and become an industry-level expert. We worked really hard to explain the concepts of computer vision and image processing and the necessary mathematics behind each concept. You will get a clear idea of how computers understand and work with image and video data. We will start with a short Python course where you will learn to code in Python and gain a clear understanding of Python syntax and some advanced concepts, such as generators, along with object-oriented programming.
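As a small taste of the kind of material such a course covers (the file name and library choice here are illustrative assumptions, not taken from the course itself), the sketch below loads video frames with OpenCV and exposes them through a Python generator, combining basic image handling with one of the advanced Python concepts mentioned above:

```python
import cv2  # OpenCV for image and video I/O (pip install opencv-python)

def frame_generator(video_path):
    """Yield frames from a video file one at a time (a Python generator)."""
    capture = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:          # end of stream or read error
                break
            yield frame         # each frame is a NumPy array (H x W x 3, BGR)
    finally:
        capture.release()

if __name__ == "__main__":
    # "sample.mp4" is a placeholder path, not a file supplied with the course.
    for i, frame in enumerate(frame_generator("sample.mp4")):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # simple processing step
        print(f"frame {i}: shape={gray.shape}, mean intensity={gray.mean():.1f}")
```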


Interview with Ionut Schiopu – ICIP 2020 award winner

AIHub

Ionut Schiopu and Adrian Munteanu received a Top Viewed Special Session Paper Award at the IEEE International Conference on Image Processing (ICIP 2020) for their paper "A study of prediction methods based on machine learning techniques for lossless image coding". Here, Ionut Schiopu tells us more about their work. The research topic of our paper is a more efficient algorithm for lossless image compression based on machine learning (ML) techniques, where the main objective is to minimize the amount of data required to represent the input image without any information loss. In recent years, a new research strategy for coding has emerged that explores the advances brought by modern ML techniques, proposing novel hybrid coding solutions in which specific modules of conventional coding frameworks are replaced with more efficient ML-based modules. The paper follows this research strategy and uses a deep neural network to replace the prediction module in the conventional coding framework.
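To illustrate the general idea of prediction-based lossless coding (not the specific network from the paper), the sketch below stands in a simple hand-crafted predictor for the learned one: each pixel is predicted from already-decoded neighbors, only the prediction residuals need to be entropy coded, and the decoder repeats the same predictions to reconstruct the image exactly.

```python
import numpy as np

def predict(image, r, c):
    """Predict pixel (r, c) from its causal neighbors (left, top, top-left).
    The paper replaces this hand-crafted rule with a learned neural predictor."""
    left = int(image[r, c - 1]) if c > 0 else 0
    top = int(image[r - 1, c]) if r > 0 else 0
    top_left = int(image[r - 1, c - 1]) if (r > 0 and c > 0) else 0
    return int(np.clip(left + top - top_left, 0, 255))  # simple gradient predictor

def encode(image):
    """Return the residuals (prediction errors); these are what get entropy coded."""
    residuals = np.zeros(image.shape, dtype=np.int16)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            residuals[r, c] = int(image[r, c]) - predict(image, r, c)
    return residuals

def decode(residuals):
    """Reconstruct the image exactly by repeating the same predictions."""
    image = np.zeros(residuals.shape, dtype=np.uint8)
    for r in range(residuals.shape[0]):
        for c in range(residuals.shape[1]):
            image[r, c] = predict(image, r, c) + residuals[r, c]
    return image

if __name__ == "__main__":
    original = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
    assert np.array_equal(decode(encode(original)), original)  # lossless round trip
```

The better the predictor, the smaller and more compressible the residuals, which is why swapping in a deep neural network for this module can improve lossless compression efficiency.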


Computer Vision: A Key Concept to Solve Many Image Data Problems

#artificialintelligence

This article was published as a part of the Data Science Blogathon. Computer vision is evolving beyond the emerging stage, and the results are incredibly useful in various applications. It is in our mobile phone cameras, which are able to recognize faces. It is in self-driving cars, which recognize traffic signals, signs, and pedestrians. It is also in industrial robots, which monitor problems and navigate around co-workers.


Nvidia's StyleGAN2: Analyzing and Improving the Image Quality of StyleGAN

#artificialintelligence

Nvidia has launched an upgraded version of StyleGAN that fixes characteristic artifacts and further improves the quality of generated images. StyleGAN, the first image generation method of its type to produce highly realistic images, was launched last year and open-sourced in February 2019. StyleGAN2 redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics and perceived image quality. According to the research paper, StyleGAN2 improves several methods and characteristics, with changes to both the model architecture and the training methods.


The more and less of electronic-skin sensors

Science

Electronic skins (e-skins) are flexible electronic devices that emulate properties of human skin, such as high stretchability and toughness, perception of stimuli, and self-healing. These devices can serve as an alternative to natural human skin or as a human-machine interface (1–3). For on-skin applications, an e-skin should be multimodal (sense more than one external stimulus), have a high density of sensors, and have low interference with natural skin sensation. On pages 961 and 966 of this issue, You et al. (4) and Lee et al. (5), respectively, report advances in skin-like electronic devices. You et al. present a stretchable multimodal ionic-electronic (IE) conductor–based "IEM-skin" that can measure both strain and temperature inputs without signal interference. Lee et al. describe an ultrathin capacitive pressure sensor based on conductive and dielectric nanomesh structures that can be attached to a human fingertip for grip pressure and force measurement without affecting natural skin sensation.

The human skin contains a large number of mechanoreceptors and thermoreceptors (nerve endings that sense deformation and temperature, respectively) that provide distinct perception of the spatial distributions of strain and temperature induced on our skin by touch stimulation (6). To replicate these sensory functions of natural skin, different types of sensors that act as artificial receptors are integrated onto an e-skin for multimodal sensation (7). However, an e-skin containing a high-density array of sensory "pixels" of different types for sensing different physical quantities tends to have a complex structure and is challenging to manufacture. A preferred strategy for realizing multimodal sensation on an e-skin is to use the same sensory unit for detecting different physical quantities without signal interference, an approach called decoupled multimodal sensing. Traditional stretchable sensors are sensitive to both strain and temperature and cannot be used as artificial multimodal receptors without signal interference.

Targeting interference-free strain and temperature sensing by a single sensory unit, You et al. creatively used the ion relaxation dynamics of an ion conductor (an elastomer mixed with an ionic liquid) to decouple the strain and temperature measurements and developed the IEM-skin, composed of an array of artificial multimodal ionic receptors. They fabricated the IEM-skin by sandwiching a thin layer of ion conductor between two layers of orthogonally patterned stretchable electrode strips (see the figure, top). A pixelated matrix of millimeter-sized artificial receptors forms between the top and bottom electrodes. The electrical properties of each receptor are affected by the externally applied strain and temperature stimuli and can be read out through impedance measurement. You et al. used a strain-independent intrinsic electrical parameter of the ion conductor, the charge relaxation time, which reflects the ionic charge dynamics of the ion conductor and is equal to the ratio of the material's dielectric constant to its ion conductivity (8, 9). The charge relaxation time is the signal readout for temperature and is not affected by deformation of the IEM-skin. For strain measurement, the bulk capacitance of the ion conductor is measured, and the effect of temperature on the capacitance is eliminated through normalization against a reference capacitance at the temperature measured by the receptor. Thus, an external strain input only changes geometric parameters of the ion conductor, whereas a temperature input primarily modulates the intrinsic electrical properties (dielectric constant and ion conductivity) of the ion conductor.
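As a rough numerical sketch of this decoupling principle (the calibration curves and constants below are invented for illustration and are not from You et al.), temperature can be read from the charge relaxation time τ = ε/σ, which is insensitive to geometry, while strain is read from the bulk capacitance after normalizing out its temperature dependence:

```python
import numpy as np

# Hypothetical calibration data (NOT from You et al.): charge relaxation time
# tau = permittivity / ionic conductivity, tabulated against temperature, and
# the reference (zero-strain) capacitance tabulated at the same temperatures.
CAL_TEMPS_C = np.array([20.0, 30.0, 40.0, 50.0])                 # deg C
CAL_TAU_S   = np.array([8e-6, 6e-6, 4.5e-6, 3.5e-6])             # s, falls as conductivity rises
CAL_C0_F    = np.array([1.00e-9, 1.05e-9, 1.10e-9, 1.16e-9])     # F at zero strain

def temperature_from_tau(tau):
    """Temperature readout: invert the tau(T) calibration (strain-independent)."""
    # tau decreases with temperature, so interpolate over the reversed arrays.
    return float(np.interp(tau, CAL_TAU_S[::-1], CAL_TEMPS_C[::-1]))

def strain_from_capacitance(c_measured, temperature_c):
    """Strain readout: normalize the bulk capacitance by the reference value at
    the temperature just measured, so the temperature effect cancels out."""
    c_ref = float(np.interp(temperature_c, CAL_TEMPS_C, CAL_C0_F))
    # Map the normalized capacitance to strain with an illustrative linear model.
    return (c_measured / c_ref) - 1.0

# Example: one receptor reports tau = 5.0 us and C = 1.18 nF.
tau, c_meas = 5.0e-6, 1.18e-9
t = temperature_from_tau(tau)
print(f"temperature ~ {t:.1f} C, strain ~ {strain_from_capacitance(c_meas, t):.2%}")
```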
Another enabling factor of the IEM-skin design is its emulation of the epidermis-dermis bilayer of human skin, achieved by suspending the receptor matrix layer over a low-friction interface layer filled with talcum powder. This design allows three-dimensional, wrinkle-like deformations of the IEM-skin under different contact modes (such as shear, pinch, tweak, and torsion) and permits the IEM-skin to distinguish these contact modes through the measured temperature and strain profiles. Data confirm that the IEM-skin can perform decoupled, real-time measurement of strain and temperature with high accuracy. The IEM-skin can serve as a human-machine interface that accepts tactile inputs of different contact modes and can be integrated into prosthetic and robotic devices to provide tactile and thermal feedback with high spatial resolution. The concept of using intrinsic electrical parameters, such as the conductivity and dielectric constant of sensing materials, for strain-independent temperature sensing can be generalized to other types of stretchable multimodal sensors for humidity, chemicals, and biomolecules. One limitation is that the method for recognizing different tactile input modes through the measured temperature and strain profiles only works for interactions with hot or cold objects at temperatures different from that of the IEM-skin. Alternative solutions may include the use of learning-based recognition models based purely on strain-distribution data, or modulation of the temperature of the IEM-skin (by adding a heating layer) according to the environment.

Skin-like electronic sensors also hold great potential for the construction of hand-worn devices, such as instrumented gloves, for quantifying tactile signals like force and pressure during finger or in-hand manipulation (10). Such data could facilitate the decoding of human hand sensation and its roles in object manipulation and enable better designs of robotic and prosthetic hands with biomimetic sensory feedback (11). Targeting imperceptible wearing and tactile sensing on fingertips, Lee et al. developed an ultrathin capacitive pressure sensor consisting of multiple layers of conductive and dielectric nanomesh structures. This sensor design is derived from the conductive nanomesh electrodes proposed by Miyamoto et al. (12), which can be directly laminated on human skin during fabrication. The electrode is fabricated by first electrospinning a water-soluble polymer, polyvinyl alcohol (PVA), into a multilayered mesh-like network of 300- to 500-nm-wide nanofibers. A 100-nm-thick gold layer is then deposited onto the PVA nanomesh sheet, and the gold-coated nanomesh sheet is transferred onto the skin surface. The sacrificial PVA nanofibers are washed off with water, but a residual layer of the dissolved PVA greatly facilitates the attachment of the resulting gold nanomesh layer onto the textured skin surface with excellent adhesion and conformal contact. The skin-integrated nanomesh electrode is stretchable and highly breathable and has exceptionally low bending stiffness, so it creates no mechanical constraint or dermatological irritation to the skin.

To fabricate a nanomesh pressure sensor (see the figure, bottom), Lee et al. first laminated a nanomesh electrode on the skin surface and then sequentially attached a dielectric nanomesh layer made of electrospun polyurethane and parylene nanofibers and another nanomesh electrode layer to form a parallel-plate capacitor structure. A nanomesh passivation layer of polyurethane nanofibers was then attached to the top electrode layer, with dissolved PVA nanofibers as the filler and adhesive. The total thickness of the nanomesh pressure sensor is ∼13 μm. When fingers wearing such a pressure sensor grip an object, the grip force applied to the sensor deforms the middle dielectric nanomesh layer and leads to a change in the capacitance measured between the top and bottom electrodes, which serves as the sensor readout.

[Figure: Improved electronic skins. Two goals in artificial touch sensors are to sense more than one stimulus with one receptor and to create wearable sensors that maintain natural skin sensation. GRAPHIC: C. BICKEL/SCIENCE]

Through object-gripping experiments performed by human participants, Lee et al. investigated the effect of the finger-integrated pressure sensor on natural fingertip sensation and found no decrease in sensory feedback caused by the attachment of the sensor. They hypothesized that the ultrathin and compliant structure of the nanomesh pressure sensor renders the device imperceptible on the fingertip. In addition, the intimate and conformal adhesion of the sensor's bottom nanomesh electrode layer to the skin surface may also contribute to the negligible interference with finger skin sensation. The sensor also shows excellent mechanical durability under cyclic compression, shearing, and surface friction, which is attributed to the high mechanical robustness of its multilayered nanomesh structure. This work highlights another new application of the previously reported skin-integrated nanomesh electronics (12) to wearable physical sensing with unprecedented performance. Future work may involve further examination of the fundamental mechanisms behind the on-skin imperceptibility of the nanomesh pressure sensor, systematic study of the skin-integrated pressure sensor's performance when grasping objects of different materials and properties (such as insulating versus conductive, hard versus soft, and smooth versus textured), and scalable fabrication of pixelated nanomesh pressure sensors over large areas with high density. The nanomesh pressure sensor could record tactile signals of human-hand manipulation with superior sensing performance and fewer data artifacts than existing instrumented gloves and e-skins.

Multimodal sensation and nonobstructive skin integration are two important features that are desirable in e-skin designs. The studies reported by You et al. and Lee et al., respectively, provide new solutions that better realize these attractive features with simplified device structures and enhanced sensing performance, without impeding natural sensation. These results will inspire new sensor designs and lead to applications of e-skins in wearable health care monitoring, sensory prosthetic and robotic devices, and high-performance human-machine interfaces.

References:
1. J. C. Yang et al., Adv. Mater. 31, 1904765 (2019).
2. T. R. Ray et al., Chem. Rev. 119, 5461 (2019).
3. T. Someya, M. Amagai, Nat. Biotechnol. 37, 382 (2019).
4. I. You et al., Science 370, 961 (2020).
5. S. Lee et al., Science 370, 966 (2020).
6. A. Zimmerman, L. Bai, D. D. Ginty, Science 346, 950 (2014).
7. S. Jeon, S.-C. Lim, T. Q. Trung, M. Jung, N.-E. Lee, Proc. IEEE 107, 2065 (2019).
8. C. Gainaru et al., J. Phys. Chem. B 120, 11074 (2016).
9. B. A. Mei, O. Munteshari, J. Lau, B. Dunn, L. Pilon, J. Phys. Chem. C 122, 194 (2018).
10. S. Sundaram et al., Nature 569, 698 (2019).
11. E. D'Anna et al., Sci. Robot. 4, eaau8892 (2019).
12. A. Miyamoto et al., Nat. Nanotechnol. 12, 907 (2017).

Acknowledgments: X.L. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (RGPIN-2017-06374).


When AI Sees a Man, It Thinks 'Official.' A Woman? 'Smile'

WIRED

Turns out, computers do too. When US and European researchers fed pictures of congressmembers to Google's cloud image recognition service, the service applied three times as many annotations related to physical appearance to photos of women as it did to men. The top labels applied to men were "official" and "businessperson;" for women they were "smile" and "chin." "It results in women receiving a lower status stereotype: That women are there to look pretty and men are business leaders," says Carsten Schwemmer, a postdoctoral researcher at GESIS Leibniz Institute for the Social Sciences in Köln, Germany. He worked on the study, published last week, with researchers from New York University, American University, University College Dublin, University of Michigan, and nonprofit California YIMBY.
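The article does not spell out the exact interface the researchers used, but Google's cloud image recognition service is commonly queried through the Cloud Vision API's label-detection endpoint; a minimal sketch of that kind of query (the file name and score threshold are illustrative assumptions) looks like this:

```python
# Minimal sketch of querying Google's Cloud Vision label-detection endpoint
# (pip install google-cloud-vision; requires application credentials).
from google.cloud import vision

def labels_for_image(path, min_score=0.5):
    """Return (description, score) pairs that the service attaches to an image."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, label.score)
            for label in response.label_annotations
            if label.score >= min_score]

if __name__ == "__main__":
    # "portrait.jpg" is a placeholder; the study fed in photos of congressmembers.
    for description, score in labels_for_image("portrait.jpg"):
        print(f"{description}: {score:.2f}")
```

Counting how often appearance-related labels such as "smile" or "chin" are returned for different groups of portraits is the kind of tally that underlies the study's comparison.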