20 Comments

I think lying and self-deception operate on several different layers, so it would be interesting to see how many layers an AI could peel back. On some fundamental level, I think the human mind lies about almost everything, as evolution intended. The main problem would be how you test the deeper layers to feed back into the training loop, but I think it can be done or bypassed.

This is never going to happen: "politicians could be subject to lie tests during campaigns to ensure they were honest". The resistance will be so fierce that no amount of public pressure and scandals will be able to force this into legislation. Even if a revolutionary takes power from a corrupt government somewhere and promises to do such testing in the future, it will quickly be buried or watered down to the point that people in power can live with it. Honest characters just do not climb the political ladder in any contemporary human society.

I think the most likely result, if we produce a lie detector that is 95% accurate, is that everyone will behave as if it is 100% accurate, and the more widely it is adopted, the worse the results will be.
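To put rough numbers on why treating 95% as 100% goes wrong, here is a back-of-the-envelope Bayes calculation. The 95% sensitivity/specificity figures and the 1% base rate of lies are illustrative assumptions, not numbers from the article:

```python
# Back-of-the-envelope Bayes calculation: how often a "lie" flag is wrong.
# Illustrative assumptions: 95% sensitivity, 95% specificity, and only
# 1% of tested statements are actually lies.
sensitivity = 0.95
specificity = 0.95
base_rate = 0.01

true_pos = sensitivity * base_rate               # lies correctly flagged
false_pos = (1 - specificity) * (1 - base_rate)  # truths wrongly flagged
precision = true_pos / (true_pos + false_pos)    # P(lie | flagged as lie)

print(f"P(actually lying | flagged as lying) = {precision:.1%}")  # ~16.1%
```

Under these assumptions, the large majority of people flagged as liars are actually telling the truth, which is exactly the failure mode of behaving as if the test were infallible.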

The second thing that will happen is that it will very quickly be retrained so that it no longer works (probably to eliminate racism), and it will be used selectively, probably through interpretation by some expert, to increase the power of the institution administering the test.

There are an awful lot of laws that nobody (or almost nobody) really wants enforced. Some of them are leftovers from moral panics, others are effectively bricks that authorities can selectively throw at people they don't like.

But if *all* laws were strictly enforced, society would fall apart instantly.

The right to remain silent would become more important. Perhaps it should be applied to non-criminal cases.

I think this sounds like a nightmare. If we could start with leaders and work our way down to common people and criminals, it might be worthwhile, because we could quiz leaders and do away with political prosecution. We could also ask attorneys and judges before a trial whether they have any dislike of the defendant or their beliefs. Basically, if we didn't test power first every time, we'd all be in trouble.

You know, I'm kind of surprised none of you IQ researchers have looked into how the vaccine will affect IQ scores in the future.

Deaths are up. Many people's immune systems are malfunctioning.

Even if you don't believe it has any effect, let's go with the presumption. People who read the news, as opposed to watching it, were less likely to take it. People who can see through propaganda were less likely, as were right-leaning people. What effect will this have in the not-too-distant future, Emil "135" Kirkegaard?

> With this in mind, you might ask: is the technology worth it?

This is always a cruel question to pose: so what if the answer is "no"? It's a big world, and someone out there will pursue it anyway.

The trick (if there is one) is to ask instead, Is there another technology that can soften the impact? Traditionally, circumstances that make people more stationary help dictators, and circumstances that allow movement help individuals. As long as it's possible for subjects to flee, and, as long as there's somewhere for them to flee *to,* totalitarianism can't completely dominate the future.

Mar 30, 2023·edited Mar 30, 2023

terrible idea, sorry

100% accuracy is unprovable

compelled submission to a machine to judge you is testifying against yourself

Mar 30, 2023·edited Mar 31, 2023

For the use case of prosecuting crime, I wonder how useful lie detectors will be when the crime occurred while the defendant was high on drugs or alcohol, or both. For crimes like rape or assault, the defendant is often drunk or on drugs. Can the machine detect a "lie" when the defendant doesn't even remember what happened?

For white collar financial crimes, this is definitely useful though.

Interesting article, but no need to spread anti-China nonsense. The Guardian is hardly a reliable source.

Mar 30, 2023·edited Mar 30, 2023

Given that all facial expressions can be put under voluntary control (as taught and learned in acting school), the data flow could be corrupted by so many false positives as to disrupt the net signal. I suspect that the same approach could be taken with MRI data points, perhaps by thinking of something truly nasty that you did and are ashamed of. It's likely that they'll still have to beat the confessions out of us to make us stop mentally resisting the AI.
