Welcome to Thursday Things! If you enjoy this edition, please click the heart icon in the header or at the end of the post to let me know.
A peaceful scene. Because why not? Photo by David Becker on Unsplash
Fun with Gmail
I’m going to smack Google in the next item of this edition, but it’s not like I don’t use the Big G’s services myself. Not so long ago I learned a little Gmail trick that I find useful, and I want to share it with you via this article, which explains it much better than I could:
Multiple Google Email Addresses – One Gmail Account
Google Gmail is a very slick, free email product. One Gmail feature that you may not be aware of is that multiple Google email addresses can be created from one Gmail account. These bonus email addresses are easy to create and manage and can take a few different forms.
Nifty! But why would you want multiple email addresses from one account?
Having multiple Gmail addresses can provide a range of uses, including easily separating personal and business email and tracking incoming email from specific subscriptions and mailing lists. There are three main methods for expanding the number of usable Google email addresses that you can have from a single Gmail account:
Using the @googlemail.com domain.
Using the “dot” or period in your email name.
Using the plus sign “+” at the end of your name and adding extra characters.
If you have a Gmail account, read the article for the details and enjoy!
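To make the tricks concrete, here’s a tiny illustration of my own (it’s not from the article, and “janedoe” is a made-up account name, not a real address):

```python
# Illustrative only: "janedoe" is a made-up account name.
# Gmail ignores dots in the name and anything after a "+", so every variant
# below delivers to the same inbox, while the "To:" address stays filterable.
def plus_address(user: str, tag: str, domain: str = "gmail.com") -> str:
    """Build a tagged variant like user+tag@gmail.com."""
    return f"{user}+{tag}@{domain}"

for tag in ("newsletters", "shopping", "receipts"):
    print(plus_address("janedoe", tag))
# janedoe+newsletters@gmail.com
# janedoe+shopping@gmail.com
# janedoe+receipts@gmail.com

# The "dot" and domain tricks work the same way: jane.doe@gmail.com and
# janedoe@googlemail.com both reach the janedoe inbox.
```

The payoff is that you can hand each subscription its own tagged address, then filter incoming mail on the “To:” line.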
AI for All!
We’ve highlighted the arrival of ChatGPT and similar “large language model” (LLM) AIs previously in Thursday Things. These tools have also become much more visible and prominent in recent months, so I’ll assume you know basically what they are.
One aspect of LLM AIs that we may not focus on much is that they are expensive to create, “train”, and operate. They require a tremendous amount of processing power, which is why most of the AIs we’ve heard about are associated with the usual deep-pocketed big tech suspects: Microsoft, Google, Apple, Facebook/Meta, etc.
But that may be about to change:
The genie escapes: Stanford copies the ChatGPT AI for less than $600
Stanford's Alpaca AI performs similarly to the astonishing ChatGPT on many tasks – but it's built on an open-source language model and cost less than US$600 to train up. It seems these godlike AIs are already frighteningly cheap and easy to replicate.
Six months ago, only researchers and boffins were following the development of large language models. But ChatGPT's launch late last year sent a rocket up humanity's backside: machines are now able to communicate in a way pretty much indistinguishable from humans. They're able to write text and even programming code across a dizzying array of subject areas in seconds, often of a very high standard. They're improving at a meteoric rate, as the launch of GPT-4 illustrates, and they stand to fundamentally transform human society like few other technologies could, by potentially automating a range of job tasks – particularly among white-collar workers – people might previously have thought of as impossible.
Many other companies – notably Google, Apple, Meta, Baidu and Amazon, among others – are not too far behind, and their AIs will soon be flooding into the market, attached to every possible application and device. Language models are already in your search engine if you're a Bing user, and they'll be in the rest soon enough. They'll be in your car, your phone, your TV, and waiting on the other end of the line any time you try to phone a company. Before too long, you'll be seeing them in robots.
ChatGPT and its relatives are amazing, and the technology will be transformational, changing many aspects of how we live, learn, work, think, create, and communicate. My question is this: do we really want all this technology to be controlled by the same giant tech companies that already have way too much information about every aspect of our lives? Probably not.
So the Stanford team’s achievement is a promising development:
But what about a language model you can build yourself for 600 bucks? A team of Stanford researchers has done just that, and its impressive performance highlights just how quickly this entire sector, and its awesome capabilities, might rapidly spin out of control.
Note that “out of control” line. We’ll come back to it.
Read the article for a description of the process, but basically what Stanford did was take Meta’s open-source LLaMA language model, use OpenAI’s GPT model to generate a large set of instruction-and-answer training examples (the kind that fine-tune an AI to function as a ChatGPT-like tool), and train LLaMA on them to create their own LLM: Alpaca! (I’ve sketched the recipe in code below.)
Next, they tested the resulting model, which they called Alpaca, against ChatGPT's underlying language model across a variety of domains including email writing, social media and productivity tools. Alpaca won 90 of these tests, GPT won 89.
All for about $600. And it’s almost as good as ChatGPT, which in itself is impressive to the point of seeming like magic.
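For the technically curious, the recipe is essentially ordinary supervised fine-tuning. Here’s a minimal sketch, with heavy caveats: the base model (“gpt2”, a small stand-in), the data file name, and the training settings are my placeholders, not what the Stanford team actually used. They fine-tuned Meta’s LLaMA 7B on roughly 52,000 GPT-generated instruction-following examples.

```python
# A rough, illustrative sketch of instruction fine-tuning, not the Stanford
# team's actual code. "instructions.json" is an assumed local file of
# {"instruction": ..., "output": ...} pairs.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # small stand-in; swap in any open-source causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Turn each instruction/answer pair into one string the model learns to complete.
def to_text(example):
    return {"text": f"Instruction: {example['instruction']}\nResponse: {example['output']}"}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = (load_dataset("json", data_files="instructions.json")["train"]
           .map(to_text)
           .map(tokenize, batched=True,
                remove_columns=["instruction", "output", "text"]))

# Standard supervised fine-tuning: next-token prediction over the examples.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-sketch",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The striking part is how little is here: no novel architecture, no giant cluster, just a modest pass of training data generated by a bigger model. That’s why the $600 figure rattled people.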
This development opens the door to what I think our new AI Age should be like — AI for All. We should each have access to our own personal AI that we can train, modify, and use as we see fit, without being beholden to the ever-changing, self-serving, and perhaps politically or ideologically influenced terms and conditions, restrictions, and limits imposed by our dear friends in Big Tech.
Will that happen? We’ll see. That’s a discussion beyond the scope of Thursday Things.
But the possibility exists:
What does this all mean? Well, it means that unlimited numbers of uncontrolled language models can now be set up – by people with machine learning knowledge who don't care about terms and conditions or software piracy – for peanuts.
It also muddies the water for commercial AI companies working to develop their own language models; if so much of the time and expense involved is incurred in the post-training phase, and this work can be more or less stolen in the time it takes to answer 50 or 100,000 questions, does it make sense for companies to keep spending this cash?
And for the rest of us, well, it's hard to say, but the awesome capabilities of this software could certainly be of use to an authoritarian regime, or a phishing operation, or a spammer, or any number of other dodgy individuals.
Yes, all of those things are going to happen, just as all current internet technology is used for those purposes. But the awesome capabilities can be of use to people to do good, beneficial, and creative things too.
Run, little Alpaca! Run and be free!
I’m the most woolly and adorable AI ever. Photo by Sébastien Goldberg on Unsplash
Self-control reconsidered
Self-control is generally considered a desirable trait — mainly because it is. But, as with anything, there can be a downside, according to this piece in Psyche:
Wish you had more self-control? You should hear the downsides
Some of the costs of high self-control are social and reputational. Imagine a prototypical highly conscientious individual – someone who always wakes up early, never allows for any distractions from their work, and adheres to a strict diet, budget and workout regimen. You might view them as ambitious, because of their determination and discipline. However, for those same reasons, you might also view this person as mechanical, uninteresting, uptight or even cold.
Well, that’s exactly the response you’d expect from people lacking self-control, isn’t it?
In fact, that’s what we’ve found in our research. In our 2022 study, we presented participants with a description of either a high self-control person (similar to the description we gave you just now) or an average self-control person, and then asked about their perceptions of the character we told them about. We found that, on average, participants rated the person with high self-control as more robot-like and less warm. Moreover, they saw the person who acted on their impulses as more real and genuine – in other words, they saw the person with high self-control as less authentic.
Really? If you’re a person with high self-control, then acting with self-control is being authentically you, is it not?
All this tells me is that people with low self-control can’t think clearly. Probably as a result of all the excessive sugar, alcohol, and other substances they routinely consume. Because they lack self-control.
High self-control can also backfire socially in another way, leading a person to be seen as having less power and status. This is because when people act impulsively, such as speaking their mind or indulging themselves, it can be interpreted as a signal of social power in the sense that the person is not concerned with censoring themselves or with conforming to social expectations. In contrast, when a person with high self-control consistently inhibits their impulsive responses, they’re seen as more predictable and keen to play by the rules, which can lead others to see them as weaker.
That’s certainly a take, low self-control people. But who do you think makes the rules?
These social downsides to high self-control can go beyond perceptions, and affect the way people with high self-control are actually treated by others, such as being excluded from social events.
Uh-huh. I’d say “excluded” presumes they wanted to go to your social event in the first place. Anyway, high self-control people can relax.1 Most of the alleged downsides identified in this research are that people with less self-control think you’re no fun and may not invite you to parties you didn’t want to go to anyway because you’re too busy running the world.
Besides, Alpaca will be your friend…
“It’s lonely at the top. Thank God.” Photo by Aziz Acharki on Unsplash
Thank you for reading!
Please click the hearts, leave a comment, and use the share feature to send this issue to a friend who might enjoy it. See you next Thursday!
1 To the extent “relax” is a thing you do.