No, there’s no drama, no weird thing. It just works, just like any other CCS charger. Elon doesn’t come out and give a free hand-job while you wait or anything. About the only piece of excitement is the mildly confused looks from actual Tesla owners when you roll up.
Also, the charging leads are really, really short; if your charging port isn't on the side of the car, the cable is unlikely to reach. And if your port is on the "other" side compared to a Tesla, it can be hard to park when the chargers are nearly full, as you might need to pull into the adjacent bay to reach the unit that's free.
Gmail has a cool feature where it can understand important emails and put things in your calendar.
Except it also does it with spam too…
Although I think this is a mistyped email address, or it's a very elaborate scam. It's hard to tell: I've had this email address for so long now that I can no longer tell the scams, spam and other Internet herpes from the idiots who can't spell. I keep getting mail from some church in Australia, and have done for the past 10 years. I could have said "sorry, but you have the wrong person", but it's a bit late now.
This airline one though, seems legitimate somehow.
There was a “manage booking” button on the email too, but it needs a login.
So I thought I'd do some research… But I was wise; I know ChatGPT has a bit of a habit of making stuff up. So I thought, "I'll get you… tell me where you get your information from".
I know, “interesting” and “COBOL” is a big ask…
None of the URLs actually work. They’re all 100% fake. The second response is quite good though – “I made up URLs that look like the kind of URLs you should be looking for when researching this stuff”. So it knows what a URL is, and treats it exactly the same as written text – “you want to know about Cobol? Here’s some words that people string together when talking about this”.
It does this with code too – “when people write database apps, this is the pattern they all seem to follow. You should go look for code that looks like this…”
It’s not giving answers, it’s giving us the shape of what an answer looks like, so when we go and search the web ourselves we know what to look for. It’s drawing the perfect looking but false McDonald’s burger you see on the advert, so that when you get the crushed slop in a box they really serve, you can recognise it.
This, folks, is why we're trying to stop kids from using ChatGPT and friends in their work. It generates plausible-looking nonsense.
Life must suck as an English teacher, since they're trying to teach kids how to write their own plausible-looking nonsense. "Write me a story that contains a badger, a horse and a trip to the moon". ChatGPT could do that well; it'd be hard to tell it from a story a human made up.