For myself, I haven't tried to use ChatGPT yet. I have tried Google's version and it is not very good.
However, over the last year my perspective has shifted from, "'AI' is overhyped" to, "oh, wow, I don't have a good sense of how powerful 'AI' is right now, but I am convinced that the evolution over the next decade will be one of the most important stories of my lifetime."
ogged's take seems reasonable. The thing I get caught up on with all the hype is that these things are only actually useful if their output is both predictable and controllable, and that doesn't seem to be the case consistently at this point. Maybe it will be at some point.
The garage door thing is genuinely surprising and useful, and maybe a counter to this piece that I read earlier and thought was pretty good: https://www.wheresyoured.at/peakai/
That's a very clever Twitter thread, and I mean that in an entirely derogatory way.
The first post links to an authoritative (and, crucially, very long) article from a reliable source - Yahoo Finance. That's called "establishing authority". You might well click through out of curiosity, but it's a very long article, so you won't read all of it. But this tweet gives you a vague feeling that the tweeter is being honest. At least he's linked to a reliable source.
The second tweet has a screenshot of a passage from that article. If you are a bit suspicious, you might click through to check - and you'd find that yes, indeed, it is an accurate and untampered screenshot. Now you're thinking "OK, Jerrick White seems like a reliable person."
The third tweet does the same thing - and, again, it's an accurate and untampered screenshot. A really suspicious person might check that too - and again they'd be reassured.
You are now really pretty convinced that this is an honest, truthful person writing.
Hold onto that, because the fourth tweet has the payload in it.
The fourth tweet doesn't have a screenshot - it has three quotes from the article.
Barnett's lawyer described him as "really happy to be telling his side of the story" hours before his death.
This is an accurate quote from the article. Here's the full paragraph:
The previous day, Barnett had been on a roll as a video camera recorded the event. "John testified for four hours in questioning by my co-counsel Brian," says Turkewitz. "This was following seven hours of cross examination by Boeing's lawyers on Thursday. He was really happy to be telling his side of the story, excited to be fielding our questions, doing a great job. It was explosive stuff. As I'm sitting there, I'm thinking, 'This is the best witness I've ever seen.'" At one point, says Turkewitz, the Boeing lawyer protested that Barnett was reciting the details of incidents from a decade ago, and specific dates, without looking at documents. As Turkevitz recalls the exchange, Barnett fired back, "I know these documents inside out. I've had to live it."
Barnett told family/friends "If I die, it's not suicide."
This is the payload. This appears nowhere in the Yahoo Finance article, and there's no other source given. It is also by far the most explosive part of the entire thread and, naturally, it's completely unsourced. Search the article if you like.
But, four tweets into a thread which has so far been an entirely accurate summary of the article, it's very likely that you wouldn't notice that. You'd take it as gospel and spread it, which is exactly what Jerrick White wants you to do. You get dogs to swallow a pill by wrapping it up in steak; you get people to swallow a lie by wrapping it up in truth.
That piece is one of the things I was reacting to!
That piece doesn't include anything about Barnett telling anyone "If I die, it's not suicide", though. Have a look.
What it does include is Barnett's family saying "this guy had post-traumatic stress disorder and anxiety attacks, which is what we believe led to his death".
A torsion spring isn't exactly a subtle thing. It's a few feet long, and they come in pairs, so unless they both break you'll see a difference.
Sorry, 5 was to 3.
On the Boeing guy, this local report seems to be the source of the quote. It's local news, so who knows, but the tweeter didn't just make it up.
To paraphrase Henry Farrell, it may be like a microwave, something that does a few things well that were much more difficult otherwise, but nothing like the omnitool originally billed.
I knew nothing about garage doors when my torsion spring broke, but it was pretty obvious when it broke. The only surprise was how heavy the garage door was with the spring broken.
ChatGPT is useful for generating boilerplate code-- here's a list of fields, I'd like a class with getters and setters and error handling.
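For concreteness, a prompt like that tends to come back with something in this vein (a minimal sketch in Python; the Contact class and its fields are made-up examples for illustration, not actual ChatGPT output):

# Hypothetical example of the kind of boilerplate an LLM is good at:
# a small class with fields, getters/setters, and basic error handling.
class Contact:
    def __init__(self, name, age):
        self.name = name  # assignments here go through the property setters below
        self.age = age

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if not isinstance(value, str) or not value.strip():
            raise ValueError("name must be a non-empty string")
        self._name = value.strip()

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, value):
        if not isinstance(value, int) or value < 0:
            raise ValueError("age must be a non-negative integer")
        self._age = value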
We may have talked about it here already, but I see it as something pretty interesting for making bottomless pools of soap opera plots, choose-your-own-adventure games, and the like. These generated entities are interesting to me - approximations of our verbal culture. Open world video game walkthroughs seem like related entities, and people like those similarly to films. There's a paper (arXiv, not reviewed) that I liked that considers what kind of knowledge is necessary to do this; it's not recognizing truth or deep insight, but what it can do for narrative is somehow similar to doing those. The authors cite Zellig Harris, Chomsky's advisor. I think I've mentioned this here before, apologies if I'm repeating myself.
https://arxiv.org/abs/2310.01425
Someone has figured out how to use AI to call me six times a day to try to sell me fraudulent Medicare supplemental insurance. I would rather destroy AI than give up my phone as a way for someone I don't already know to contact me. And I'm pretty sure that's the choice.
It's local news, so who knows
So it wasn't "he told family/friends" - Jerrick White made that up. It was one woman, full name unknown, who says her mother was a friend of Barnett's mother, contacted local news to say that she happened to run into him a month ago and he said, to her and to no one else, "If I die, it wasn't suicide".
vs
Dude's entire family who said "this had been a hellish ordeal for him and he had anxiety and PTSD that led to his death".
I mean, come on.
The frustrating but unsurprising thing is that ChatGPT isn't actually great at writing essays, but it's really good at code, which is why it's the death of the humanities and not of C+ junior developers or something. It's a massive pedagogical challenge, but once that thing can write assessment reports it will be my friend.
The Calabat likes videos where ChatGPT plays chess remarkably well until it takes its own pieces.
I see Ajay is in the pocket of big airplane. I discount the family's story insofar as they have a much better chance of recouping damages from saying Boeing gave him PTSD, versus saying Boeing had him killed. Also, his lawyers are shocked, but that's also typical in cases of suicide, so again, not dispositive. Maybe the local lady is lying. Possible! Also possible she's telling the truth. I'm not just trying to play conspiracy theorist; it genuinely feels murky to me, as these things often do. And I genuinely wish I had all day to argue (it's not Kate!) but I don't.
once that thing can write assessment reports it will be my friend
Oh, another thing that they're apparently very good at: HIPAA'ed versions are recording and summarizing physician visit notes.
I see Ajay is in the pocket of big airplane.
Please remove all articles from the seat back pocket before deplaning.
Your garage door experience is very super extremely not what has ever happened when I have tried to get ChatGPT to assist me in getting actual information about anything ever. But congratulations!
I have no opinion on what really happened to the Boeing guy, but "If I die, it's not suicide" is the kind of thing that a guy seriously considering suicide might say, if he's concerned about his posthumous reputation, or about his family collecting on his life insurance (in fact, life insurance generally pays if the decedent held the policy for more than two years before suicide, but most people don't know that (in the U.S.) (not legal advice)). Also the kind of thing a friend of the deceased is likely to say that he said.
I can probably fix a garage door for you if I'm in Cleveland.
"I discount the family's story insofar as they have a much better chance of recouping damages from saying Boeing gave him PTSD, versus saying Boeing had him killed"
This is, and I mean this entirely seriously, the sort of thing that a sociopath would say. "Of course the dead man's family are going to lie about the death of a loved one if it marginally improves their chances of financial gain. Who wouldn't?"
Yes, 15 would make sense if the family had a whole legal and PR team on call.
As with MOOCs, I'm skeptical of the long-term business models for a lot of the proposed applications. "Look at how many people we won't have to hire!" is appealing to a certain organizational leadership type until "How the hell did that end up being so expensive?" becomes the harsh reality.
Also, I don't get chat interfaces for things that don't seem like they'd be done in chats if you didn't have a chatbot in front of you.
Also, how the hell are there so few Google results for "chatbotage"?
However! Whisper transcription (also an OpenAI product) really is very good and makes various things in my work life a lot easier and better. I'm hoping there will soon be more options to hook it into speech-to-text for good direct dictation workflows, too, because our dysgraphic kid finds the current state of the art for dictating papers (etc.) still too clunky and unpleasant to use.
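(For anyone who wants to play with the open-source version locally, a minimal sketch looks something like this; the model size and file name below are just placeholders, not anything from my actual setup:)

# Minimal local-transcription sketch with the open-source whisper package
# (pip install openai-whisper). Model name and audio file are placeholders.
import whisper

model = whisper.load_model("base")          # larger models ("small", "medium") are slower but more accurate
result = model.transcribe("dictation.m4a")  # returns a dict with "text" plus timestamped "segments"
print(result["text"])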
I tried to use ChatGPT to write me an introduction for a speaker based on their biography. It was at least a starting off point, but it definitely read like a junky bot webpage.
It's rather frustrating that nothing free on desktop is as good as my Android's transcription.
I'd rather have a chatbot in front of me than a skilled surgeon's lobotomy.
The problem with torsion springs is if they break, they break with a great deal of force. The new ones have a cable inside to keep the break controlled. Back in the day, you used to just lose your head or damage your car.
Maybe that's what happened to Middleton? It would be irresponsible not to speculate.
Kate was the torsion spring of Boeing.
in fact, life insurance generally pays if the decedent held the policy for more than two years before suicide, but most people don't know that (in the U.S.) (not legal advice)
I took out a life insurance policy after Elke was born, and the investigator who came to interview me decided to give me a pep talk about suicide at the end. "I've interviewed a lot of people, and in all these years I've only met one who really didn't have anything to live for." I assumed that one person wasn't me, but I don't know for sure.
Relevant to 18, Dan Luu makes the point here that LLM "hallucinations" really aren't that much different than the shallow, garbagey results you get from Google for many practical queries these days. Like, querying "how to fix a garage door" might link to a video showing you how to check your torsion spring (actually it kind of looks like it does?) but it's just as likely to send you to an infinite number of content farm sites that are morally (or actually) equivalent to a ChatGPT session. (An actual ChatGPT session is exactly what Quora has started doing lately, which is incredibly annoying and unhelpful. I mean, yes, Quora is shit, but it used to be marginally better than the average content farm and now it's not.)
IME, the Google search of the archives here tends to do things like, for a search on "Uganda," returning all hits for "Africa." (If the rest of you aren't getting stuff like that, maybe it's a weird problem on my end, but I find it kind of maddening.)
I have whatever kind of life insurance you can get without talking to someone about my feelings.
The family is most likely right re: the Boeing guy, but it does have a Michael Clayton vibe.
https://www.youtube.com/watch?v=NYknJmoDDPs
I think my life insurance expires the same year my son should finish college. So my wife isn't tempted to pick up some floozy.
16: I acknowledge its usefulness. I fed it a bibliography that was in the wrong format due to some insane house style and made it fix it. But I think it is telling that "I'm going to feed it to ChatGPT" or some similar variation is what academics say when they don't care about the writing.
There was a horror story on Bluesky of someone with students whose work quality suddenly plummeted, and he discovered they had been told by other instructors to use ChatGPT to sound whiter. Interestingly, it had in fact worked with things like email requests to professors, where it was all about the style, but very much not for academic coursework.
12: YES! I hope it can be more useful than that, but this is my deep-seated fear. Also that personal connections and relationships in work will be even more devalued. You don't need a real human doctor to accompany you through illness, just a bot. Personalized, personal service will be a luxury product available only to professional sports players and rich Senators.
Another horror story waiting to unfold.
As the kids say, cringe:
Is this thing legit? How accurate are we talking? Break it down for me, nerd style.
Pretty darn accurate! Not perfect, but it's the next best thing to a lab test for a quick check. Powered by patented HeHealth wizardry (think an AI so sharp you'd think it aced its SATs), our AI's been battle-tested by over 40,000 users, hitting accuracy levels from 65% to 96% across various conditions. However, dive into the nerdier deets and you'll see that things like your selfie's lighting, the particular health quirks you're scouting for, and a rainbow of skin tones might tweak those numbers a bit.
Can I use Calmara on other area other than penis?
While you might be curious about using Calmara for more than just peen checks, it's really in its element when focusing on the D. Its genius brain isn't quite tuned for other zones like the balls, booty, or mouth, meaning it might miss the mark accuracy-wise. We're all about sticking to what we know best, so if you're noticing anything sus elsewhere, it's a solid move to reach out to a health pro for the full lowdown.
Considering reporting to the FDA personally just to be safe.
It's asking for a pic of your junk? That's got to be satire
I wondered, but I found a closely related company called HeHealth that's been around for 3 years, and their founder's LinkedIn post seems pretty sincere.
Cringe aside, the fact that they seem to be carefully limiting the scope of their claims suggests that they are in fact sincere.
I can't access the link because my work blocks recently registered domains so just going by the excerpt in 46.
I think generative AI is going to end up being as consequential as the web is.
I loathe that gen AI models were trained on my words and images and those of millions of others without our consent. It feels horrendously invasive, like finding out that someone has been following you around for years with a voice recorder and has used your voice to build a weapon that will inevitably get used against more vulnerable people. I especially loathe that image generation is being touted as AI being "creative," when in reality it could not meaningfully create without all of those source images that it stole.
I am excited by the cool stories of AI usage, although most of them seem to be other types of AI (not generative AI) -- things like better detection of tumors in radiation images, better prediction of which roads will need pothole maintenance, better predictions of animal migration patterns and weather impacts.
I am downright horrified by the cavalier way that colleagues in my field admit to using generative AI. They seem to have no conception of the difference between low-stakes and high-stakes usage. Asking a gen AI model to suggest family vacation ideas? Sure, why not. Asking it to draft public policy language? Are you insane?
Someone I otherwise respect was excited about using AI to tell you if a person qualifies for food stamps. In a situation where someone could wind up committing federal fraud or being deported if the software gets it wrong? GOOD LORD ABOVE.
52: if you did that food stamp query in healthcare using ChatGPT as opposed to some kind of proprietary system, you would be violating HIPAA privacy rules.
no conception of the difference between low-stakes and high-stakes usage
Yes! I think this is also implicitly thinking of it as intelligent, but instead of saying it sucks, thinking it's awesome.
Funny, but how is this a business? What's the average value from getting someone this gullible to download an app, maybe $1? In-app ads for what, supplements or gadgets maybe? If 1/1000 buy the crap, that's 40 sales so far.
I also really, really dislike that nearly every Zoom call I join now has 1-5 AI notetakers silently recording everything. I don't know how to let people do this for bona fide accessibility reasons and forbid others from doing it, so in practice it ends up with everyone being allowed.
So now there are all of these somewhat-accurate transcripts with even more vaguely accurate summaries floating around, that people are going to refer back to in a month or a year as an accurate rendering of the meeting.
And the thing is, they AREN'T. Which is really, really important when you are having meetings about consequential things that affect people's lives and livelihoods. I don't have time to fact-check all of them, but when I do I'm inevitably appalled by the ways AI misunderstands people's accents, idioms, or vocabulary; its near-total inability to accurately decipher when a group of people has actually come to a decision;* and its hilariously overconfident wrongheadedness in eagerly summarizing the most "important" parts of the meeting.
*This is genuinely hard! PEOPLE struggle to do this! But the software is much worse than the median human, imo.
I think this is also implicitly thinking of it as intelligent, but instead of saying it sucks, thinking it's awesome.
People really, really, REALLY struggle with understanding that there is no correlation between the educated grammar/syntax of gen AI output and any underlying accuracy to the information.
I get it; it flies in the face of what we've learned from thousands of years of trying to gauge the accuracy of human language and human intelligence. But it's frightening.
53: Exactly! And further to that: for many gen AI models, if you feed a question into them, you are giving them ownership (or at least usage rights) of your data to further train their model.
What happens when enough teenagers in Texas start asking chatGPT how to get abortion medication, and then some enterprising DA decides to ask the same chatGPT model to spit out names or identifying details related to teens and abortion? Do we want to gamble their safety on the chance that the software developers have the right safeguards installed such that chatGPT won't just spit out some incriminating data?
49: I'm sincere on LinkedIn. That proves nothing.
Funny, but how is this a business?
Imagine millions of dick pics, all with ToS-gifted data and metadata on their owners or owners' partners. Pretty monetizable.
The internet was fun for a while, but it's obviously past time to nuke it and start over.
Maybe they could do like Hot or Not, except Diseased or Not for genitals. Not medical advice, just crowdsourced wisdom.
The Boeing thing still really bothers me. I usually fly Southwest.
I can definitely see the appeal of using an app instead of having to wear a jimmy hat. Is being able to do that worth trading away pictures of your junk? Maybe!
The app that does for syphilis what Ron DeSantis did for measles.
One of my kids' docs just told us she's switching portal/billing/messaging services because the current one informed her that AI will soon be training on all the content.
34- I recently read that you can change a setting and it will switch back to how it used to behave. Something like "verbatim results" under the settings menu.
I forgot the exact procedure, maybe I'll google it.
My son says all the kids use Duck Duck Goose or something.
Belatedly: the Dan Luu link in 33 is excellent. I don't know that I agree with every single one of his points, but the overall piece is very strong.
And the phenomena he's describing are utterly familiar to me as both the 'accidental techie' in most of my workplaces over the past 30 years and as a librarian. People who don't think Google search results have gotten appreciably worse in recent years are (in my experience) bad -- that is to say, entirely average -- at using the web in general and at distinguishing between legitimate vs scammy results in particular. Google has gotten much worse, and in ways that drive ordinary people to scams and unhelpful results much more effectively.
Is there anything to the idea I've seen that chatgpt and others are ridiculously subsidized with VC money, and when that's gone costs for the user (paid somehow) are going to go way up?
70: oh I'm sure. I resent it, though.
72: Very strange headline, but ends with the elephant in the room:
Armenia... is nominally a Russian ally though its relations with Moscow have deteriorated in recent months over what Yerevan says is Russia's failure to protect it from Azerbaijan.
As a result, Armenia has pivoted its foreign policy towards the West, to Moscow's chagrin, with senior officials suggesting it might one day apply for European Union membership.
In a statement posted on Tuesday on the Telegram messaging app, Russian Foreign Ministry spokeswoman Maria Zakharova suggested Yerevan's deepening ties with the West were the reason for Armenia having to make concessions to Azerbaijan.
Am I avoiding a ton of shit, and therefore curious enough to find the exact wording? Yes! I won't link it, but it's three sentences and the middle one is machine-translated as "Please note: this statement [by Pashinyan] is in no way connected with Russia." The Reuters summary is accurate.
I don't see that it's strange, apart from being filed under "Asia-Pacific."
Dear lord https://x.com/presidentaz/status/1769998494196965516?s=46&t=nbIfRG4OrIZbaPkDOwkgxQ
||
Did Tommy Hilfiger go for a vaguely Confederate logo deliberately?
|>
Not impossible, but I doubt it -- a graphic reference to the national Confederate flag as opposed to the battle flag is a deep cut that wouldn't even work as much of a dog whistle, especially decades ago when the logo was chosen. I would be really really surprised if that were the case.
I doubt it. Red, white, and blue are in lots of flags. Only the battle flag (with the crossed blue bars with stars) is really ever used by modern assholes.
The political Confederate flag would look stupid painted on a muscle car.
72 et seq.: He's going to get war anyway. I'm a little surprised that AZ hasn't tried already, but it's been less than a year since they took the rest of Karabakh, so I guess they're re-stocking their drone supplies and working to replace however many men they lost.
The geography isn't going to change and the correlation of population/wealth isn't going to change. Southern Armenia is sparsely populated and connected to the northern part of the country by very few roads. Maybe the Armenians can hold these; maybe a small amount of Western aid can help them do that. (I think I read something about France opening a consulate in southern Armenia, and maybe there would be French forces as well? I dunno.) Large-scale Western military assistance is off the table as long as Ukraine is still fighting. I don't think the Armenians have enough armed forces to threaten a counterattack on Nakhichevan, but I could be wrong. At any rate, the northern approaches there are flat by local standards, could be an AZ vulnerability.
There are more Azeris in Iran than in Azerbaijan, so I guess we should ask Ogged what Tehran thinks about an expansive and militarily aggressive Azerbaijan.
I guess the Armenians won immigration since they are in California.
62. Tim Berners-Lee seems to agree with you.
33,73 Dan Luu is fantastic. I like everything Lucene-based that I've touched, his offhand comment there seems right. I didn't know that he had written longer-form essays, more to read there.
||
Opus is magnificent. I think literally perfect, and perhaps a new (to me, at least) genre of film to boot.
|>
Musicians aren't actually penguins, Moby. They just look that way because of the tailcoats.
https://www.rogerebert.com/reviews/ryuichi-sakamoto--opus-2024
And extremely worth your while to see in theater.
The last movie I saw in the theater was awful so I'm not eager to go back.
Seeing an awful movie in theater is like getting thrown by a horse. If you don't get back on you'll never experience the perfect joy of driving your lance through a fleeing orc.
Speaking of quackery, is this real, chat? It seems fake, but there's some collaboration with the state public health department, and this JAMA letter with results.
I tutor a young person who has one of those service dogs! Her parents are pretty on the ball and science minded, so I doubt they'd be on board for something that was sketchy.
Honestly, the dog would be worth it for the emotional support alone. She is incredibly bonded to him.
Pretty sure cats can smell some upper respiratory infections. Sensitivity might vary from cat to cat though.
Right. Everyone has heard of a cat scan.
Calmara has responded to the negative press, also on LinkedIn.
Calmara is not a silver bullet and nor are we trying to say that we are. But to anyone who wants to know more: our AI functions like how a visit to a doctor is like. Diseases like syphilis or herpes have very characteristic visual presentation, and our AI can detect them very well. And....our AI has seen more cases than any doctor possibly can. 😉
Somehow I don't think they have a US regulatory specialist on staff or contract.
Maybe they could train a dog to review people's junk?
They sniff crotches with no training.
91 I really want to see that. I met him around 2013-14 when there was a Tsai Ming-liang retro running at the Museum of the Moving Image and I saw him at almost every screening. He subsequently did some music for one of his films. Super cool guy and an incredible composer.
73: I think it's true that Google has gotten worse, but I think it's underrated that the web has gotten worse and Google has largely just failed to keep pace with the firehose of shit it's indexing.
107: There's a chicken-and-egg thing though, because a lot (most?) of that firehose of shit was crafted to vacuum up Google traffic specifically.
79: ISTM normally this gets reported as "Azeri PM says Armenia must return disputed areas or face war" -- it takes a while for the article to get totally clear on the 5 W's of the war threat, since there are so few direct quotes from Baku. The effect is a bit like Pashinyan holding up a hand puppet, as I assume he did.
Arizona (AZ) declaring war on Armenia seems like taking California-bashing maybe a step too far, but I would say that.
That's a long way to go for Colorado river water.
109: Oh.
86: You think Arizona's Azerbaijan's objectives are that expansive?
This is a pretty good rant about some use of AI to generate poor quality content at large scale: https://www.theintrinsicperspective.com/p/here-lies-the-internet-murdered-by
112: A land bridge to Nakhichevan and however much of southern Armenia comes along with that? Yes.
State propaganda about "Western Azerbaijan" and working to pretend that Armenian monuments throughout the region are Albanian (not the Hoxha kind of Albanian) are signals that more war is on its way. Aliyev's legitimacy is based on oil money and conquest, and those are seldom areas where the leader says "ok, we've got enough." Overreach is the classic failure mode there, but how much reaching will happen before the over kicks in?
Neither the West nor the Armenians probably have the connections to do it, but this might be a good time to foment unrest among Azeris in Iran. If Tehran thinks their provinces are going to be next, they're less likely to ignore, or support with weapons sales, an Aliyev land grab along their border.
There's another kind of Albanian?
Oil-wise, some bits and pieces.
114.2. Point. But OTOH Azerbaijani* policy to date has as far as I've noticed been exemplary** in its patience and methodicalness***. Threats could be consistent with limited aims (say 3 small corridors, 1 big) to the exclaves, and a permanent peace (for which they seem to have a willing counterparty in Pashinyan).
Invading Armenia proper would be different in that it would violate a recognized border and trigger the CSTO, and Aliyev has to know it. There's also the possibility, unlike in Ukraine, of direct Western (let's be honest, US) intervention; and IDK but would assume such intervention could stop Azerbaijan dead.
(Selfishly, of course, everything says throw Armenia under the bus: call Russia's CSTO bluff (or open a second front that would help Ukraine); bring Armenia into the US orbit, with bases to discomfit Russia and Iran, and provide alternatives to Turkey; open pipeline routes that bypass Georgia.)
*Is there meaningfully an Azerbaijan apart from Aliyev? IDK, but the (AFAIK) smoothness of the succession suggests some successful institution-building.
**In a non-normative Machiavellian sense.
***So clunky. There must be something better.
There's another kind of Albanian?
There is! Caucasian Albanian, which is not all that helpful as a descriptor, but there we go. There's also, historically speaking at least, Caucasian Iberia, just so that things don't ever get simple.
Is there meaningfully an Azerbaijan apart from Aliyev?
TIL that his wife is vice president, so I'm guessing no, not really. Or that whatever is, is so opaque that I would have to pay a lot more attention to have any idea. On the other hand, she's apparently been VP since 2017, so it's not like I'm paying a super lot of attention anyway. Just opining, in time-honored blog fashion, based on old expertise, general political intuition, and eyeballing maps.
Yes, Aliyev has been more patient and methodical than I would have expected, given petro-dictatorship and nepo dude. And he's only 62, so cognitive decline is probably still a good long way off.
violate a recognized border and trigger the CSTO
I think the CSTO is a dead letter, and the Kremlin has more in common with AZ anyway. Could be wrong of course.
Trying to occupy the Aras valley and establish a land corridor to Nakhichevan (the other three exclaves are barely worth mentioning, as is the Armenian exclave that nobody seems to mention) would be tantamount to invading Armenia proper and/or a pretext for same.
116 last: I was unclear. I meant threats as part of a negotiating strategy with limited objectives.
113: Thanks. That was fucking horrifying. I've been blasé about AI's effect on my work despite being a writer partly because I see a lot of shitty writing generated by actual humans, but that's a good story about just how bad it can get, as is the Futurism article it links to.
The Caucasus and the Balkans have a lot in common, including Albanians I guess.
The Balkans have had a population of US Midwesterners since NATO intervention in 1995, and a similar population is also found in the Iowa Caucasus.
US just called for an immediate ceasefire in Gaza. Russia and China vetoed it.
https://apnews.com/article/united-nations-us-vote-gaza-ceasefire-resolution-f6453803b3eacc9fbaa2ce5a025e2a94
It fell short of demanding an immediate ceasefire:
"(The Security Council) Determines the imperative of an immediate and sustained ceasefire to protect civilians on all sides, allow for the delivery of essential humanitarian assistance, and alleviate humanitarian suffering, and towards that end unequivocally supports ongoing international diplomatic efforts to secure such a ceasefire in connection with the release of all remaining hostages..."
I guess after the recess, we get another vote on who is Speaker of the House.
123: true. It said "it is imperative that this happens immediately and we support the efforts to make it happen immediately" which is different.
Calmara has presumably talked to someone with regulatory knowledge as they have changed tack to insist they are "a lifestyle product, not a medical app".
Sending pictures of your penis into web apps is a lifestyle.
Too much curve is a medical thing.
If I want to do roleplay in the bedroom revolving around having tech that doesn't actually exist, there are much more creative options. Warhammer 40k comes to mind.
"Of course you can guess what happens next."
"He reconsecrates the Terminator armour?"
"Don't be fatuous, Jeffrey."