Software


30 of the best Harvard University courses you can take online for free

Mashable

CS50's Introduction to Game Development; CS50's Introduction to Programming with Python; CS50's Web Programming with Python and JavaScript


The Leak That Has Big Tech and Regulators Panicked

Slate

In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out.


To PiM or Not to PiM

Communications of the ACM

A 20nm 6GB function-in-memory DRAM, based on HBM2 with a 1.2 TFLOPS programmable computing unit using bank-level parallelism, for machine learning applications.


On the (In)Security of ElGamal in OpenPGP

Communications of the ACM

Let G be a group and g ∈ G a generator. To create a key pair (sk, pk), pick a random integer x, compute the element X ← g^x, and output (sk, pk) := (x, X). Given pk, to encrypt a message M, pick an ephemeral random integer y, compute the elements Y ← g^y and Z ← X^y = g^(xy), and output C = (C1, C2) := (Y, M · Z) as the ciphertext. Given sk, to decrypt C, first recover the element Z from C1 as per Z ← Y^x = g^(yx), and then use C2, Z to recover M = C2/Z. To instantiate the scheme, the following details have to be fixed: Which group G shall be used?
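
As a rough illustration of the scheme described above, here is a minimal Python sketch of textbook ElGamal over the multiplicative group Z_p*. The toy prime, generator, and integer message encoding are assumptions chosen for readability; they are nowhere near the group sizes or message encodings a real OpenPGP implementation requires.

    # Textbook ElGamal over Z_p* with a tiny toy prime -- illustration only.
    import secrets

    p = 467   # toy safe prime (467 = 2*233 + 1); real deployments use >= 2048-bit groups
    g = 2     # a generator of Z_p* for this p

    def keygen():
        x = secrets.randbelow(p - 2) + 1       # secret key: random integer x
        return x, pow(g, x, p)                 # (sk, pk) = (x, X = g^x mod p)

    def encrypt(X, M):
        y = secrets.randbelow(p - 2) + 1       # ephemeral random integer y
        Y = pow(g, y, p)                       # Y = g^y
        Z = pow(X, y, p)                       # Z = X^y = g^(xy)
        return Y, (M * Z) % p                  # ciphertext C = (C1, C2) = (Y, M*Z)

    def decrypt(x, C1, C2):
        Z = pow(C1, x, p)                      # recover Z = Y^x = g^(yx)
        return (C2 * pow(Z, -1, p)) % p        # M = C2 / Z via modular inverse

    sk, pk = keygen()
    C1, C2 = encrypt(pk, 42)                   # message must be an element of Z_p*
    assert decrypt(sk, C1, C2) == 42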



The open-source AI boom is built on Big Tech's handouts. How long will it last?

MIT Technology Review

Companies like Google--which revealed at its annual product showcase this week that it is throwing generative AI at everything it has, from Gmail to Photos to Maps--were too busy looking over their shoulders to see the real competition coming, writes Sernau: "While we've been squabbling, a third faction has been quietly eating our lunch." Greater access to these models has helped drive innovation--it can also help catch their flaws. AI won't thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious. Most open-source releases still stand on the shoulders of giant models put out by big firms with deep pockets.


Pushing Buttons: Building a gaming PC is painstaking and humbling – I can't wait to do it again

The Guardian

Next week I am going to build a gaming PC. I've done it once before and wrote an article about what a nightmare the process was – although the issue turned out to be with the USB stick I used to install the motherboard update patch and … well, don't get me started. The thing is, I figured it out because when you have played PC games for as long as I have, you know that figuring technical stuff out is a key part of the experience. While games consoles have always been pure plug-and-play experiences, PC games have definitely not. When I started playing in the early 1990s, they came on multiple floppy discs – The Secret of Monkey Island was on eight – and you had to keep swapping them in and out of the drive, like feeding a voracious robot.


From Code Complexity Metrics to Program Comprehension

Communications of the ACM

Code is hardly ever developed from scratch. Rather, new code typically needs to integrate with existing code and is dependent upon existing libraries. Two recent studies found that developers spend, on average, 58% and 70% of their time trying to comprehend code but only 5% of their time editing it [32, 51]. This implies that reading and understanding code is very important, both as an enabler of development and as a major cost factor during development. But as anyone who tries to read code can attest, it is hard to understand code written by others. This is commonly attributed, at least in part, to the code's complexity: the more complex the code, the harder it is to understand, and by implication, to work with. Identifying and dealing with complexity is considered important because the code's complexity may slow down developers and may even cause them to misunderstand it--possibly leading to programming errors. Conversely, simplicity is often extolled as vital for code quality. To gain a sound understanding of code complexity and its consequences, we must operationalize this concept. This means we need to devise ways to characterize it, ideally in a quantitative manner. And indeed, many metrics have been suggested for code complexity. Such metrics can then be used for either of two purposes. In industry, metrics are used to make predictions regarding code quality and development effort.
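
To make the idea of a quantitative complexity metric concrete, here is a small Python sketch that counts one classic measure, McCabe's cyclomatic complexity, over a program's abstract syntax tree. The particular set of node types treated as decision points is an illustrative assumption; production tools apply more refined counting rules.

    # Rough cyclomatic complexity: 1 + number of decision points in the AST.
    import ast

    # Node types counted as decision points (an illustrative choice).
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(source: str) -> int:
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

    example = """
    def classify(x):
        if x < 0:
            return "negative"
        for i in range(x):
            if i % 2 == 0 and i > 2:
                print(i)
        return "done"
    """
    print(cyclomatic_complexity(example))  # if / for / if / and -> complexity 5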


Research for Practice: The Fun in Fuzzing

Communications of the ACM

For this edition of Research for Practice (RfP), we enlisted the help of Stefan Nagy, an assistant professor in the Kahlert School of Computing at the University of Utah. We thank John Regehr--who has written for RfP before--for making this introduction. Nagy takes us on a tour of recent research in software fuzzing, or the systematic testing of programs via the generation of novel or unexpected inputs. The first paper he discusses extends the state of the art in coverage-guided fuzzing (which measures the testing progress in terms of program syntax) with the semantic notion of "likely invariants," inferred via techniques from property-based testing. The second explores encoding domain-specific knowledge about certain bug classes (for example, use-after-free errors) into test-case generation. His last selection takes us through the looking glass, randomly generating entire C programs and using differential analysis to compare traces of optimized and unoptimized executions, in order to find bugs in the compilers themselves.
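
As a sketch of the differential-testing idea in Nagy's last selection, the following Python snippet compiles one and the same C program at two optimization levels and flags any disagreement in output, which would be a candidate compiler bug. The compiler name ("cc") and flags are assumptions; real pipelines built on random program generators add timeouts, crash triage, and test-case reduction.

    # Differential testing sketch: compare program output at -O0 vs -O2.
    import os
    import subprocess
    import tempfile

    def run_at_opt_level(c_source: str, opt: str) -> str:
        """Compile c_source with the given -O flag and return the program's stdout."""
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "prog.c")
            exe = os.path.join(tmp, "prog")
            with open(src, "w") as f:
                f.write(c_source)
            subprocess.run(["cc", opt, "-o", exe, src], check=True)
            return subprocess.run([exe], capture_output=True, text=True, timeout=5).stdout

    def differs(c_source: str) -> bool:
        """True if the -O0 and -O2 builds disagree -- a candidate compiler bug."""
        return run_at_opt_level(c_source, "-O0") != run_at_opt_level(c_source, "-O2")

    program = '#include <stdio.h>\nint main(void){ printf("%d\\n", 1 + 2); return 0; }\n'
    print(differs(program))  # expected: False for a well-behaved compiler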