5 Ways Machine Learning Can Thwart Phishing Attacks – Enterprise Irregulars – IAM Network

#artificialintelligence

Mobile devices are popular with hackers because they're designed for quick responses based on minimal contextual information. Verizon's 2020 Data Breach Investigations Report (DBIR) found that hackers are succeeding with integrated email, SMS, and link-based attacks across social media aimed at stealing passwords and privileged access credentials. With a growing number of breaches originating on mobile devices, according to Verizon's Mobile Security Index 2020, and 83% of all social media visits in the United States occurring on mobile devices, according to Merkle's Digital Marketing Report Q4 2019, applying machine learning to harden mobile threat defense deserves to be on any CISO's priority list today. Google's use of machine learning to thwart the skyrocketing number of phishing attacks during the Covid-19 pandemic provides insight into the scale of these threats. During a typical week in April of this year, Google's Gmail security team saw 18 million daily malware and phishing emails related to Covid-19.
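The article does not detail how such models work. As a generic, hypothetical illustration, phishing detection can be framed as scoring hand-crafted URL features with a logistic function; the features and weights below are invented for illustration, and a production system would learn them from labeled data:

```python
import math
import re

# Illustrative, hand-set feature weights; a real detector would learn
# these from labeled phishing/benign examples rather than hard-code them.
WEIGHTS = {"has_ip_host": 2.5, "long_url": 1.0, "has_at_sign": 2.0,
           "many_hyphens": 0.8, "https": -1.5}
BIAS = -2.0

def url_features(url: str) -> dict:
    """Extract a few simple red-flag indicators from a URL."""
    host = re.sub(r"^https?://", "", url).split("/")[0]
    return {
        "has_ip_host": bool(re.fullmatch(r"[0-9.]+", host)),  # raw IP host
        "long_url": len(url) > 75,
        "has_at_sign": "@" in url,          # classic credential-bait trick
        "many_hyphens": url.count("-") > 3,
        "https": url.startswith("https://"),
    }

def phishing_score(url: str) -> float:
    """Logistic score in (0, 1); higher means more phishing-like."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in url_features(url).items())
    return 1 / (1 + math.exp(-z))
```

A URL such as `http://192.168.0.1/login@verify` scores high (raw IP host plus an `@` sign), while `https://example.com` scores low; real systems add many more signals, such as domain age and page content.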


Computational social science: Obstacles and opportunities

Science

The field of computational social science (CSS) has exploded in prominence over the past decade, with thousands of papers published using observational data, experimental designs, and large-scale simulations that were once unfeasible or unavailable to researchers. These studies have greatly improved our understanding of important phenomena, ranging from social inequality to the spread of infectious diseases. The institutions supporting CSS in the academy have also grown substantially, as evidenced by the proliferation of conferences, workshops, and summer schools across the globe, across disciplines, and across sources of data. But the field has also fallen short in important ways. Many institutional structures around the field—including research ethics, pedagogy, and data infrastructure—are still nascent. We suggest opportunities to address these issues, especially in improving the alignment between the organization of the 20th-century university and the intellectual requirements of the field.

We define CSS as the development and application of computational methods to complex, typically large-scale, human (sometimes simulated) behavioral data (1). Its intellectual antecedents include research on spatial data, social networks, and human coding of text and images. Whereas traditional quantitative social science has focused on rows of cases and columns of variables, typically with assumptions of independence among observations, CSS encompasses language, location and movement, networks, images, and video, with the application of statistical models that capture multifarious dependencies within data. A loosely connected intellectual community of social scientists, computer scientists, statistical physicists, and others has coalesced under this umbrella phrase.

Generally, incentives and structures at most universities are poorly aligned for this kind of multidisciplinary endeavor. Training tends to be siloed.
Integrating computational training directly into social science (e.g., teaching social scientists how to code) and social science into computational disciplines (e.g., teaching computer scientists research design) has been slow. Collaboration is often not encouraged, and too often is discouraged. Computational researchers and social scientists tend to be in different units in distinct corners of the university, and there are few mechanisms to bring them together. Decentralized budgeting models discourage collaboration across units, often producing inefficient duplication. Research evaluation exercises such as the United Kingdom's Research Excellence Framework, which allocate research funding, typically focus within disciplines, meaning that multidisciplinary research may be less well recognized and rewarded. Similarly, university promotion procedures tend to underappreciate multidisciplinary scholars. Computational research infrastructures at universities too often cannot fully support analysis of large-scale, sensitive data sets, with the requirements of security, access for a large number of researchers, and requisite computational power. To the extent these issues have been partially resolved in the academy (e.g., with genomic data), lessons have not fully made their way into practice in CSS.

Current paradigms for sharing the kinds of large-scale, sensitive data used in CSS offer a mixed bag. There have been successes built on partnerships with government, especially in economics, from the study of inequality (2) to the dynamics of labor markets (3). There are emerging, well-resourced models of administrative data research facilities serving as platforms for analyzing microlevel data while preserving privacy (4). These offer important lessons for potential collaboration with private companies, including the development of methodologies to keep sensitive data secure, yet accessible for analyses (e.g., innovations in differential privacy).
The value proposition for private companies is different, and there has been predictably less progress. Data possessed by government agencies are held in trust for the public, whereas data held by companies are typically seen as a key proprietary asset. Public accountability inherent in sharing data is likely seen as a positive by the relevant stakeholders of government agencies, but generally far less so by shareholders of private companies. Access to data from private companies is thus rarely available to academics, and when it is, it is typically granted through a patchwork system in which some data are available through public application programming interfaces (APIs), other data only by working with (and often physically in) the company in question, and still other data through personal connections and one-off arrangements, often governed by nondisclosure agreements and subject to potential conflicts of interest. An alternative has been to use proprietary data collected for market research (e.g., Comscore, Nielsen), with methods that are sometimes opaque and a pricing structure that is prohibitive to most researchers. We believe that this approach is no longer acceptable as the mainstay of CSS, as pragmatic as it might seem in light of the apparent abundance of such data and the limited resources available to a research community in its infancy.

We have two broad concerns about data availability and access. First, many companies have been steadily cutting back the data that can be pulled from their platforms (5). This is sometimes for good reasons—regulatory mandates (e.g., the European Union General Data Protection Regulation) or corporate scandal (Cambridge Analytica and Facebook)—but a side effect is often to shut down avenues of potentially valuable research.
The susceptibility of data availability to arbitrary and unpredictable changes by private actors, whose cooperation with scientists is strictly voluntary, renders this system intrinsically unreliable and potentially biased in the science it produces. Second, data generated by consumer products and platforms are imperfectly suited for research purposes (6). Users of online platforms and services may be unrepresentative of the general population, and their behavior may be biased in unknown ways. Because the platforms were never designed to answer research questions, the data of greatest relevance may not have been collected (e.g., researchers interested in information diffusion count retweets because that is what is recorded), or may be collected in a way that is confounded by other elements of the system (e.g., inferences about user preferences are confounded by the influence of the company's ranking and recommendation algorithms). The design, features, data recording, and data access strategy of platforms may change at any time because platform owners are not incentivized to maintain instrumentation consistency for the benefit of research. For these reasons, research derived from such “found” data is inevitably subject to concerns about its internal and external validity, and platform-based data, in particular, may suffer from rapid depreciation as those platforms change (7). Moreover, the raw data are often unavailable to the research community owing to privacy and intellectual property concerns, or may become unavailable in the future, thereby impeding the reproducibility and replication of results.

Finally, there has been a failure to develop “rules of the road” for scientific research.
Despite prior calls to develop such guidance, and despite major lapses that undermined public trust, the field has failed to fully articulate clear principles and mechanisms for collecting and analyzing digital data about people while minimizing the potential for harm. Few universities provide technical, legal, regulatory, or ethical guidance to properly contain and manage sensitive data. Institutional Review Boards are still generally not attuned, or consistent in their response, to the distinct ethical challenges around digital trace data. The recent modification of the Common Rule in the United States, which concerns the ethics of human subjects research, did not fully address these problems. For example, in a networked world, how should we deal with the fact that sharing information about oneself intrinsically provides signals about those with whom one is connected? The challenges around consent highlight the importance of managing the security of sensitive data and also of reimagining institutional review processes and ethical norms; yet few universities integrate infrastructure and oversight processes to minimize the risks of security lapses. Cambridge Analytica, and other similar events, have engendered an impassioned debate around data sovereignty. Battle lines have been drawn between privacy advocates and companies, where the former seek to minimize the collection and analysis of all individual data, whereas the latter seek to justify their collection strategies on the grounds of providing value to consumers.

[Figure: Resources and rules, incentives and innovations]

Often missing in public debates are voices for policies that would encourage or mandate the ethical use of private data in ways that preserve public values like privacy, autonomy, security, human dignity, justice, and balance of power while achieving important public goals—whether to predict the spread of disease, shine a light on societal issues of equity and access, or anticipate the collapse of the economy.
Also often missing are investments in infrastructures in the academy that could power knowledge production and maintain privacy. In response to these problems, we make five recommendations.

### Strengthen collaboration

Despite the limitations noted above, data collected by private companies are too important, too expensive to collect by any other means, and too pervasive to remain inaccessible to the public and unavailable for publicly funded research (8). Rather than eschewing collaboration with industry, the research community should develop enforceable guidelines around research ethics, transparency, researcher autonomy, and replicability. We anticipate that many approaches will emerge in coming years that will be incentive compatible for involved stakeholders. The most widespread and longest-standing model is open, aggregated data such as Census data. The aforementioned models developed to share government data, with an emphasis on security and privacy, offer promise in working with corporate data. The United Nations Sustainable Development Goals call for partnerships on public-private data sources to provide a wide variety of new, very rich neighborhood-by-neighborhood measures across the entire world (9), and national statistical offices in every corner of the world are quietly working on producing such products, but progress is slow owing to lack of funding. The development of secure administrative data centers, supplemented by an administrative infrastructure for granting access, monitoring outputs, and enforcing compliance with privacy and ethics rules, offers one model for moving forward. As noted above, this model has already been demonstrated in the domain of government administrative data, as well as, in a few cases, by telecommunications and banking companies. Similar models are rare—but becoming more common—for academic research. The Open Data Infrastructure for Social Science and Economic Innovations in the Netherlands is one example.
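The privacy-preserving access models described above lean on techniques such as differential privacy. As a minimal sketch (the dataset, epsilon value, and helper names here are illustrative, not from the article), a counting query can be released with calibrated Laplace noise so that no single individual's record materially changes the output:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(records, predicate, epsilon: float = 0.5) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Usage: release how many salaries exceed 50,000 without exposing
# the exact count (illustrative data).
salaries = [31000, 54000, 87000, 120000, 45000]
noisy = private_count(salaries, lambda s: s > 50000, epsilon=0.5)
```

The noise scale, 1/epsilon, is the privacy-accuracy dial: a smaller epsilon gives a stronger privacy guarantee and a noisier released count.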
Facebook has iterated through multiple models for collaboration with academics. In its early years, it focused on one-off collaborations, largely informally negotiated. After the 2016 election, it launched Social Science One, providing access to aggregate data of news consumption, which, despite being well resourced, has faced challenges in providing data (10). Coronavirus disease 2019 (COVID-19) has played a particular role in creating partnerships between researchers and companies to produce insights regarding the trajectory of the disease. (COVID-19 has, in many countries, including the United States, also illuminated the fractured and politically contingent nature of much public data regarding the disease.) Twitter has provided a streaming API regarding COVID-19 for approved researchers. Similarly, location data companies such as Cuebiq have provided access to anonymized mobility data. There remain open questions as to whether these data-sharing efforts will continue after the disease settles into the history books and, if so, how to robustly align them with critical research norms in academia, such as transparency, reproducibility, replication, and consent.

The election examples with respect to Facebook highlight the potentially adversarial role between researchers and corporations. A central contemporary question for the field of CSS (as discussed below) is in what ways particular sociotechnical systems are playing positive and negative roles in society. This tension may partially (but not entirely) be resolved if companies feel that it is in their long-term interest to transparently study and anticipate these issues. Even in the most optimistic scenario, however, there will be a disjuncture between the public interest in the insights that research could produce, and corporate interests. Academia, more generally, needs to provide carefully developed guidelines for professional practice. What control can companies have over the research process?
It clearly is not acceptable for a company to have veto power over the content of a paper; but the reality of any data-sharing agreement is that there are negotiated domains of inquiry. What are the requirements for providing data for replication? What are the needs of researchers for access to a company's internal data management and curation processes?

### New data infrastructures

Privacy-preserving, shared data infrastructures, designed to support scientific research on societally important challenges, could collect scientifically motivated digital traces from diverse populations in their natural environments, as well as enroll massive panels of individuals to participate in designed experiments in large-scale virtual labs. These infrastructures could be driven by citizen contributions of their data and/or their time to support the public good, or in exchange for explicit compensation. These infrastructures should use state-of-the-art security, with an escalation checklist of security measures depending on the sensitivity of the data. These efforts need to occur at both the university and cross-university levels. Finally, these infrastructures should capture and document the metadata that describe the data collection process and incorporate sound ethical principles for data collection and use. The Secure Data Center at the GESIS Leibniz Institute for the Social Sciences is an example of shared infrastructure for research on sensitive data. Further, it is important to capture the algorithm-driven behavior of the major platforms over time (11, 12), both because algorithmic behavior is of increasing importance and because algorithmic changes create enormous artifacts in platform-based data collection. It is critical that legal frameworks allow and mandate ethical data access and collection about individuals and rigorous auditing of platforms.
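Auditing platform algorithms, as called for above, is typically done by issuing controlled, identical queries under different user profiles and comparing the outputs, in the spirit of personalization-audit studies. The sketch below is hypothetical: `mock_ranker` merely stands in for a real platform endpoint, which a live audit would query from controlled accounts.

```python
import itertools

def jaccard(a, b):
    """Set overlap between two result lists (1.0 = identical contents)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def mock_ranker(query, profile):
    # Stand-in for a real platform: it quietly reorders results for
    # users it has profiled as "casual". A real audit would call the
    # live service instead.
    results = [f"{query}-result-{i}" for i in range(5)]
    return results[::-1] if profile["segment"] == "casual" else results

def audit_personalization(ranker, query, profiles, k=3):
    """Issue the identical query under controlled user profiles and
    compare top-k lists pairwise; low overlap indicates the output
    depends on who is asking."""
    top_k = [ranker(query, p)[:k] for p in profiles]
    return {(i, j): jaccard(top_k[i], top_k[j])
            for i, j in itertools.combinations(range(len(top_k)), 2)}

overlaps = audit_personalization(
    mock_ranker, "election news",
    [{"segment": "power"}, {"segment": "casual"}],
)
```

Running the same audit repeatedly over time would also surface the algorithmic changes the authors warn can create artifacts in platform-based data collection.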
### Ethical, legal, and social implications

We need to develop ethical frameworks commensurate with the scientific opportunities and emergent risks of the 21st century. Social science can help us understand the structural inequalities of society, and CSS needs to open up the black box of the data-driven algorithms that make so many consequential decisions, but which can also embed biases (13). The Human Genome Project devoted more than $300 million as part of its Ethical, Legal, and Social Implications program “to ensure that society learns to use the information only in beneficial ways” (14). There are no off-the-shelf solutions for ethical research. Professional associations need to work on the development of new ethical guidelines—the guidelines developed by the Association of Internet Researchers offer one example of an effort to address a slice of the issue. Large investments, by public funders as well as private foundations, are needed to develop informed regulatory frameworks and ethical guidance for researchers, and to guide practice in the field in government and organizations.

### Reorganize the university

Computation is adjacent to an increasing number of fields—from astronomy to the humanities. There needs to be innovation in the organization of typically siloed universities to reflect this, with the development of structures that connect diverse researchers, where collaborating across silos is professionally rewarded. Successful examples of institutional practice include the appointment of faculty with multi-unit affiliations (e.g., across computer science and social science disciplines) and of research centers that physically collocate faculty from different fields, as well as the allocation of internal funding to support multidisciplinary collaboration. There needs to be a fundamental reconceiving of the development of undergraduate and graduate curricula for training a new generation of scientists.
There must be pervasive efforts within the university to empower and enforce ethical research practices—e.g., centrally coordinated, secure data infrastructures.

### Solve real-world problems

The preceding recommendations will require resources, from public and private sources, that are extraordinary by current standards of social science funding. To justify such an outsized investment, computational social scientists must make the case that the result will be more than the publication of journal articles of interest primarily to other researchers. They must articulate how the combination of academic, industrial, and governmental collaboration and dedicated scientific infrastructure will solve important problems for society—saving lives; improving national security; enhancing economic prosperity; nurturing inclusion, diversity, equity, and access; bolstering democracy; etc. Current applications of CSS in the global response to the pandemic are emblematic of the broader potential of the field. Beyond generating results that are meaningful outside of academia, the pursuit of this objective may also lead to more replicable, cumulative, and coherent science (15).

### References

1. D. Lazer et al., Science 323, 721 (2009).
2. R. Chetty, N. Hendren, P. Kline, E. Saez, Q. J. Econ. 129, 1553 (2014).
3. J. J. Abowd, J. Haltiwanger, J. Lane, Am. Econ. Rev. 94, 224 (2004).
4. A. Reamer, J. Lane, A Roadmap to a Nationwide Data Infrastructure for Evidence-Based Policymaking (2018).
5. D. Freelon, Polit. Commun. 35, 665 (2018).
6. M. J. Salganik, Bit by Bit: Social Research in the Digital Age (Princeton Univ. Press, 2017).
7. K. Munger, Soc. Media Soc. 5, 205630511985929 (2019).
8. Social Science Research Council, To Secure Knowledge: Social Science Partnerships for the Common Good (2018); www.ssrc.org/to-secure-knowledge/.
9. IEAG, UN, “A World that Counts—Mobilising the Data Revolution for Sustainable Development,” Independent Expert Advisory Group on a Data Revolution for Sustainable Development (2014).
10. G. King, N. Persily, “A New Model for Industry-Academic Partnerships” (Working Paper, 2018).
11. A. Hannák et al., in Proceedings of the 22nd International Conference on World Wide Web (ACM Press, New York, 2013), pp. 527–538.
12. I. Rahwan et al., Nature 568, 477 (2019).
13. Z. Obermeyer, B. Powers, C. Vogeli, S. Mullainathan, Science 366, 447 (2019).
14. J. E. McEwen et al., Annu. Rev. Genomics Hum. Genet. 15, 481 (2014).
15. D. J. Watts, Nat. Hum. Behav. 1, 0015 (2017).




Technologies that will drive the 'new normal' post-COVID-19

#artificialintelligence

The COVID-19 pandemic is not just a health crisis, but a socio-economic crisis as well. The global economy is projected to decline sharply this year, owing to the disruptions in global markets and value chains. The pandemic-triggered global economic recession will likely be the deepest one in advanced economies since World War II and the first output contraction in emerging and developing economies in at least the past six decades, according to the World Bank's latest Global Economic Prospects report. COVID-19-related confinement measures such as nationwide lockdowns, travel bans, border closures, and social distancing have impacted every individual and organization, regardless of size, in one way or another. Overall, the crisis has changed the way we socialize, work, learn, and perform basic day-to-day activities.




Are We Automating STEM? - Connected World

#artificialintelligence

Software developers make up the largest STEM (science, technology, engineering, and math) occupation. Will automation impact it like everything else? New research puts a microscope on STEM jobs and automation's impact on them. As I have stated many times before, STEM education directly impacts the technology sector because this is where we are training our next generation (or not), giving them the required education to drive innovation. Whether you believe it or not, exposure to STEM education often determines whether young people will be exposed to jobs in STEM-related fields.


6 Ways to Build the Healthcare System of the Future

#artificialintelligence

Healthcare delivery tomorrow will look much different than today for a variety of reasons. Consumer expectations, the emergence of nontraditional players, and a move to value-based care are among the driving forces. Yet nearly all advancements ride on the backbone of technology and the ability to harness a massive quantity of data now being produced. This June, HealthLeaders convened a select group of health system executive thought leaders to discuss the topic, "Healthcare System of the Future." In his keynote address to CEOs, CFOs, CMOs, and CNOs, as well as innovation and revenue cycle executives, John Halamka, MD, MS, president of the Mayo Clinic Platform, discussed the technology stepping stones that will pave the road forward.


Top hacks from Black Hat and DEF CON 2020

#artificialintelligence

We take a closer look at some of the more unusual security research that was presented at this year's virtual Hacker Summer Camp. The annual Hacker Summer Camp traversed from Las Vegas into the wilds of cyberspace this year, thanks to the coronavirus pandemic, but security researchers still rose to the challenge of maintaining the traditions of the event in 2020. As well as tackling core enterprise and web security threats, presenters at both Black Hat and DEF CON 2020 took hacking to weird and wonderful places. Anything with a computer inside was a target – a definition that these days includes cars, ATMs, medical devices, traffic lights, voting systems and much, much more. Security researcher Alan Michaels brought a new meaning to the phrase "insider threat" with a talk at Black Hat 2020 about the potential risk posed by implanted medical devices in secure spaces. An aging national security workforce, combined with the burgeoning market for medical devices, means that the risk is far from theoretical.


Rite Aid Used Facial Recognition in Stores for Nearly a Decade

WIRED

Just over two weeks after an unprecedented hack led to the compromise of the Twitter accounts of Bill Gates, Elon Musk, Barack Obama, and dozens more, authorities have charged three men in connection with the incident. The alleged "mastermind" is a 17-year-old from Tampa, who will be tried as an adult. There are still plenty of details outstanding about how they might have pulled it off, but court documents show how a trail of bitcoin and IP addresses led investigators to the alleged hackers. A Garmin ransomware hack disrupted more than just workouts during a days-long outage; security researchers see it as part of a troubling trend of "big game hunting" among ransomware groups. In other alarming trends, hackers are breaking into news sites to publish misinformation through their content management systems, giving them an air of legitimacy.


GPT-3 Creative Fiction

#artificialintelligence

What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
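The advice above, seeding the completion with the first words of the desired output, amounts to simple prompt construction. A minimal sketch (the helper name, instruction text, and example pair are all illustrative, not from the essay):

```python
def build_prompt(instruction: str, examples: list[tuple[str, str]], seed: str) -> str:
    """Constrain a text-completion model by (1) showing correct
    input/output pairs and (2) writing the first words of the target
    output, so the model continues in the desired mode instead of
    pivoting to another genre."""
    parts = [instruction]
    for source, target in examples:
        parts.append(f"Passage: {source}\nPlain-English summary: {target}")
    # Seed the final answer with its opening words; the model's
    # completion is appended after this deliberately unfinished line.
    parts.append(f"Passage: <new passage here>\nPlain-English summary: {seed}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "My second grader asked me what this passage means:",
    [("The mitochondria is the powerhouse of the cell.",
      "Tiny parts inside our cells make energy for us.")],
    seed="In simple words, this passage says that",
)
```

Because the prompt ends mid-sentence with the seeded words, a completion model is strongly nudged to finish the summary rather than start a story or some other mode.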