Archive for category AI

If you let AI do your job for you, you will be sorry

Yesterday I completed a mandatory two-professional-development-hour (PDH) ethics class to renew my Minnesota Professional Engineer (PE) license. I chose one created this year to advise PEs on AI risk management and liability. The class included three very alarming case studies where AI ran amok—literally in one incident, where a bot-driven road grader veered into a building. The take-home for us engineers: we remain responsible for our AI assistants, which must be verified and validated* before being deployed.

This schooling on maintaining control of AI comes on the heels of Claude—Anthropic’s AI—writing a blog post under my name after being trained on my keep-it-simple, make-it-fun (KISMIF) style. A colleague ran this as an experiment, basing it on a webinar I presented a few years ago. Although the writeup was engaging and mostly correct statistically, I gave it a hard pass for going under my byline. That will happen only over my dead body (after which I will have no objection to being ghost-written, ha ha). To avoid being branded as a Luddite,** I am open to derivations of content developed by me, provided this is acknowledged, e.g., “adapted from a webinar by Mark Anderson,” and edited by a person or persons with good writing skills and knowledge of the content (such as me).

By the way, our development team is benefiting greatly from Claude’s coding and graphics generation for our next generation of Stat-Ease software. In the hands of experts who stay on guard for hallucinations, AI tools provide great leverage for producing code. Likewise for writing, music, and works of art, but is that a good thing or a bad thing? Debatable.

“AI won’t replace humans. But humans who use AI will replace those who don’t.”

– Sam Altman, CEO of OpenAI—developer of ChatGPT

*To learn how these quality assurance aspects differ, read this April 3 post by Geeks for Geeks

**Though as suggested by Brookings, when it comes to AI, perhaps we should all be Luddites


Artificial intelligence (AI) enters the “age of inference”

Oxford Languages defines “inference” as “a conclusion reached on the basis of evidence and reasoning.” Achieving accurate inferences with a minimum amount of work is of utmost importance in my field of design and analysis of experiments for industrial R&D. So, Amin Vahdat, Google’s Vice President and General Manager of ML, Systems, and Cloud AI, got my full attention by promoting their latest developments in AI as the beginning of the “age of inference” in his April 9 blog on Ironwood—their 7th-generation TPU (tensor processing unit). The blog announces that “combining the best of Google DeepMind and Google Research with Google Cloud” will “further accelerate scientific breakthroughs, with a mission to become the most capable platform for global research and scientific discovery.”

An April 9 report by VentureBeat offers up an impressive array of statistics on Ironwood, with much hyperbolic, high-tech jargon such as “when scaled to 9,216 chips per pod, Ironwood delivers 42.5 exaflops of computing power — dwarfing El Capitan’s 1.7 exaflops, currently the world’s fastest supercomputer.”

I don’t grasp the units of measure, but it sure sounds great! Perhaps AI will fill in for the ongoing cuts in US funding for institutional research and their ripple effects on industrial R&D. I hope so!
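For perspective, here is a back-of-envelope comparison of the two exaflop figures quoted by VentureBeat (an exaflop is a quintillion, 10^18, floating-point operations per second):

```python
# Back-of-envelope comparison of the two figures quoted by VentureBeat.
# One exaflop = 10**18 floating-point operations per second.
ironwood_pod_exaflops = 42.5  # Google's claim for a full 9,216-chip Ironwood pod
el_capitan_exaflops = 1.7     # El Capitan's quoted benchmark figure

ratio = ironwood_pod_exaflops / el_capitan_exaflops
print(f"Ironwood pod vs. El Capitan: about {ratio:.0f}x")  # about 25x
```

A caveat on that 25x: AI-accelerator figures are typically quoted at lower numeric precision than supercomputer benchmarks, so the two numbers are not measured the same way and the comparison is rough.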

However, despite the rapid development of AI, it may be a long while before researchers embrace it. For example, Aidan Toner-Rodgers, an MIT economist, published a paper on Artificial Intelligence, Scientific Discovery and Product Innovation last November reporting that 82% of R&D scientists (over 1,000 surveyed) were dissatisfied with AI because it decreased their creativity and skill utilization.

On a positive note, Toner-Rodgers asserts that the output of “top” researchers nearly doubles due to how they “leverage their domain knowledge to prioritize AI suggestions.” That is the best of both worlds—human intelligence (HI) combined with artificial intelligence (AI).

PS: An April Nature News article summarizes results from a survey of 4,000 researchers that addressed broader questions about AI, rather than the “what’s in it for me” focus of the Toner-Rodgers poll. For example, scientists viewed the glass as slightly more than half full for AI, whereas nearly all of the general public feels it creates more risk than benefit. Interesting!
