>> No. 16782 Anonymous
4th December 2018
Tuesday 12:03 pm
16782 UK police wants AI to stop violent crime before it happens
https://www.newscientist.com/article/2186512-exclusive-uk-police-wants-ai-to-stop-violent-crime-before-it-happens/

>As for exactly what will happen when such individuals are identified, that is still a matter of discussion, says Donnelly. He says the intention isn’t to pre-emptively arrest anyone, but rather to provide support from local health or social workers. For example, they could offer counselling to any individual with a history of mental health issues that had been flagged by NDAS as being likely to commit a violent crime. Potential victims could be contacted by social services.
>> No. 16792 Anonymous
4th December 2018
Tuesday 8:13 pm
16792 spacer
I wonder if we'll all get a visit from social services and barred from 'spoons on the same day.

How many times have we had "X was already known to police and social services" now?
>> No. 16793 Anonymous
4th December 2018
Tuesday 9:05 pm
16793 spacer
>>16792
236 times.
>> No. 16796 Anonymous
5th December 2018
Wednesday 1:31 am
16796 spacer
It's not entirely bonkers. Chicago have tried something similar and it seems to be working.

https://www.economist.com/united-states/2018/05/05/violent-crime-is-down-in-chicago
>> No. 16797 Anonymous
5th December 2018
Wednesday 5:29 am
16797 spacer
>>16796

They 'think' predictive crime software has made a difference. That doesn't convince me. Even if it does work, I'm not sure it's worth it, and I certainly don't trust our own government with this sort of power. They already quietly abuse things like ANPR to harass people who are too friendly with known criminals, which I personally think is fucking disgusting, and which makes me doubt that 'being contacted by social services' would be where this idea ends. I'm all for pre-emptive solutions, and I'm certainly in favour of proactive policing, but I really don't think you need to profile and track people to know that you might need more outreach in the ghetto.
>> No. 16798 Anonymous
5th December 2018
Wednesday 9:36 am
16798 spacer
Anything that takes the actual police out of the equation is a win in my book. They are not mental health support, and when they act as such, people get hurt.
>> No. 16799 Anonymous
5th December 2018
Wednesday 10:42 am
16799 spacer
I think this plan tackles the problem from the wrong end. There's obviously a whole host of civil liberties issues involved if you say X or Y must be put under close watch just because a computer algorithm said so. As it stands, that person will not have committed the crime the computer thinks they are likely to commit, and not even the smartest software you can come up with will change that fact.

Our legal system largely functions on the assumption that a person is an innocent citizen until they actually go and commit a crime which can then be proven in a court of law. For authorities to take action against you before you have committed a crime, it must be immediately obvious, or at least very highly likely, that you were directly about to do something illegal. For example, if you are a member of a daft militant wog group and authorities have gathered through intelligence that you and your associates are planning an attack.

But just because somebody has a history of physical violence, for example, that doesn't mean the government should have the right to assume, based on computers and artificial intelligence, that they are going to offend again. They should be given the benefit of the doubt until there are immediately obvious signs that they are about to become violent again. That is a hugely different approach from saying they need to be monitored just because a computer thinks so.

I think the answer is yet again good social work. Have social workers and police go into schools and educate people properly. Offer them opportunities in life that will make them less likely to become criminals. And educate and empower the socially vulnerable. That way, you are able to address social problems without suspecting so-far innocent citizens just because they are statistically likely to do illegal things.

Also, don't underestimate the mission creep of such schemes. For now, they say they're just going to use it on would-be criminals and would-be victims. But once an AI framework like that is in place, who's to say it stays limited to those types of individuals? What if the council just wants people to put their wheelie bins out on time, and uses AI to spot you as somebody likely to keep forgetting?
>> No. 16800 Anonymous
5th December 2018
Wednesday 11:30 am
16800 spacer
>>16797
>They already quietly abuse things like ANPR to harass people who are too friendly with known criminals
Oh, it's you again.
>> No. 16801 Anonymous
5th December 2018
Wednesday 1:46 pm
16801 spacer
>>16800

Sorry? I don't understand.

It's a real thing, though.
>> No. 16802 Anonymous
5th December 2018
Wednesday 2:04 pm
16802 spacer
>>16799

>I think the answer is yet again good social work.

How do you target that social work though? We can't give everyone a social worker. We can't make every school a priority for violence-reduction programmes. Either you have an algorithm for figuring out who is at highest risk of becoming involved in violent crime, or you rely on guesswork, which all too often boils down to prejudice and bigotry.

Using AI doesn't change anyone's legal rights, but it does potentially offer us a fairer and more accurate way to target our resources.
>> No. 16803 Anonymous
5th December 2018
Wednesday 2:09 pm
16803 spacer
>>16802

>Using AI doesn't change anyone's legal rights

Only, in my opinion, if law enforcement remains blind to those flagged by the AI. But I don't believe for a second the rozzers wouldn't be able to see your precrime percentages, and even if any flag were claimed to be for social services' eyes only, I'm not sure how long that would last.

I realise that's simply the 'slippery slope' argument, but I think it's applicable here.
>> No. 16804 Anonymous
5th December 2018
Wednesday 2:16 pm
16804 spacer
>>16803

There's also the issue of how much we trust the government with our data. How many data breaches have we had in the last decade? I understand they're getting better at not leaving drives on trains, but once your "79% thief" score is leaked, good luck getting that back in the bottle.

It's also easy to paint this as a first step towards a China-style social ranking system. I don't personally think that's the endgame for our society, but I would say this sort of scheme would never be rolled back. Even if it successfully reduced crime, the only 'next step' is edging the scope wider and wider.
>> No. 16806 Anonymous
5th December 2018
Wednesday 3:25 pm
16806 spacer
>>16802

> Either you have an algorithm for figuring out who is at highest risk of becoming involved in violent crime, or you rely on guesswork, which all too often boils down to prejudice and bigotry.

Not really. You can still send social workers into known "trouble areas" and then tackle the problems after an actual person has made an assessment of what's wrong there. If crime rates then go down, you can very likely count that as a success of your social work. It's not complicated. And really, what do you think is the better way to reach at-risk people: a social worker personally encouraging them to stay on the path of virtue, or a computer algorithm that says they need to be kept under watch?

Also, I don't buy the argument that computers and AI might be more cost-efficient. These systems cost huge amounts of money to develop, and then to maintain and evaluate by specially trained personnel. And you still have no guarantee that they will work as promised.

The only real value of these AI systems is that they allow politicians to do yet more grandstanding, by having you believe they will employ shiny new technology to get tough on crime. The spiel then goes that if you doubt the wisdom of it, you are accused of jeopardising public safety and the well-being of the vulnerable. And these are usually the same people who, by cutting budgets for social work, contributed to the problem in the first place.

Catching criminals is always much more media friendly than preventing them from becoming criminals in the first place.

And as has been said, what is the government going to do with your data? I'm sure a host of other institutions would like to know about it as well. Are they going to let your bank know that you have been calculated to be at risk of committing crimes? Your employer? And if indeed hackers manage to hack into those databases, what's going to happen? Will you be blackmailed? Will your risk assessment score be made public?

And you only need to look at the kind of social scoring that China has been rolling out. If people know there is an AI framework that can label them a troublemaker, a lot of them will avoid any behaviour they feel could even remotely affect their score.
>> No. 16807 Anonymous
5th December 2018
Wednesday 3:39 pm
16807 spacer
>>16806

>Not really. You can still send social workers into known "trouble areas"

How do you know those "trouble areas"? Either you're guessing, or you're using a formal methodology for identifying them. If you guess, you're highly likely to guess wrong, potentially in ways that are racist or otherwise bigoted. A formal methodology is an algorithm. If you're just calculating reported crimes per capita, that's still an algorithm. Using more sophisticated algorithms allows you to get more nuanced insights.
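To labour the point, the "naive baseline" fits in a few lines of code. A toy sketch, with area names and figures entirely invented:

    # Even the "no algorithm" option is an algorithm: reported crimes
    # per head, ranked. Areas and numbers are made up for illustration.
    reported_crimes = {"Area A": 412, "Area B": 97, "Area C": 230}
    population = {"Area A": 15000, "Area B": 22000, "Area C": 9500}

    def rank_by_rate(crimes, pop):
        """Rank areas by reported crimes per 1,000 residents."""
        rates = {a: 1000 * crimes[a] / pop[a] for a in crimes}
        return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

    for area, rate in rank_by_rate(reported_crimes, population):
        print(f"{area}: {rate:.1f} reported crimes per 1,000 residents")

The word "reported" is already doing a lot of work even there, which is exactly why the question is only whether your model is crude or nuanced, not whether you have one.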

>Also, I don't buy the argument that computers and AI might be more cost-efficient. These systems cost huge amounts of money to develop, and then to maintain and evaluate by specially trained personnel.

There's a spectrum from "Keith in the office did a spreadsheet" to "we gave Capita a billion quid". There are always surprises in the data, you just need a competent statistician to start finding them. If you hire the right people and have collected reasonably clean data, you can get actionable insights in a matter of days with a hugely positive RoI. It's not voodoo, it's not rocket science, it's just maths at scale.

>Catching criminals is always much more media friendly than preventing them from becoming criminals in the first place.

Preventing crime is much harder than it looks. A lot of our intuitions are wrong, it takes a long time to see the results of your work and we often fail to use rigorous evaluation methods.

A classic example is Scared Straight. In the US, it was quite popular to take naughty kids into prison to show them where they could end up if they kept acting like a prick. Intuitively, this makes a reasonable amount of sense - if you're a delinquent teenager who isn't scared of a bollocking, you might be scared of the possibility of ending up in prison. Nobody bothered to evaluate these programmes for years; when we did, we found that they increased the crime rate, probably because it took away the fear of the unknown. A lot of states are still using Scared Straight programmes, even though we know that they're actively counterproductive.

https://files.givewell.org/files/DWDA%202009/Scared%20Straight/Campbell%20Scared%20Straight%20review.pdf
>> No. 16808 Anonymous
5th December 2018
Wednesday 3:42 pm
16808 spacer
>>16806

>If people know there is an AI framework that can label them a troublemaker, a lot of them will avoid any behaviour they feel could even remotely affect their score.

This is the real issue. Would being identified at a protest count against you? Talking about surveillance on the internet? Using a VPN? Swearing near a policeman? I can't think of a better way to suppress people. Even if the intentions are good, the implications are not.
>> No. 16809 Anonymous
5th December 2018
Wednesday 3:43 pm
16809 spacer
>>16807

>There's a spectrum from "Keith in the office did a spreadsheet" to "we gave Capita a billion quid". It's not voodoo, it's not rocket science, it's just maths at scale.

Yes, but I don't think Keith is going to be predicting criminal tendencies in Excel any time soon, is he?
>> No. 16810 Anonymous
5th December 2018
Wednesday 4:24 pm
16810 spacer
>>16809

You'd be surprised; most of these systems are just the right Keith in the right place at the right time. They get sexed up when they're sold to the media.

'Predicting crime' is regularly code for 'someone crunched the existing numbers and reported the rate at which crimes occur in black spots'. How useful that is to you is much the same as an actuarial calculation: it might be meaningful at large scale, but for the individual it's bullshit.
>> No. 16811 Anonymous
5th December 2018
Wednesday 4:35 pm
16811 spacer
>>16801
Some lad whining about how privilege wasn't real suggested that ANPR stops were somehow on par with "driving while black" stops.

>>16802
>Using AI doesn't change anyone's legal rights, but it does potentially offer us a fairer and more accurate way to target our resources.
The operative word here is "potentially". I won't rehash discussions I've had in other venues, but AI isn't magic and if you train your model on biased data then its predictions will be similarly biased.
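To make that concrete, here's a toy sketch with every number invented: two groups with the same true offending rate, one policed twice as heavily. The records alone make the over-policed group look twice as risky, and any model trained on them inherits that.

    import random
    random.seed(0)

    # Toy data, not a real policing dataset: both groups offend at the
    # SAME true rate, but offences in group A are detected twice as often.
    def make_records(n=10000, true_rate=0.05):
        rows = []
        for _ in range(n):
            group = random.choice("AB")
            offended = random.random() < true_rate
            detection = 0.8 if group == "A" else 0.4  # biased observation
            rows.append((group, offended and random.random() < detection))
        return rows

    records = make_records()
    for g in "AB":
        labels = [hit for grp, hit in records if grp == g]
        print(f"group {g}: recorded offending rate {sum(labels) / len(labels):.3f}")
    # A model fitted to these labels will score group A as roughly twice
    # as "risky" as group B, purely because of how the data was collected.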
>> No. 16812 Anonymous
5th December 2018
Wednesday 4:44 pm
16812 spacer
>>16807

>There are always surprises in the data, you just need a competent statistician to start finding them. If you hire the right people and have collected reasonably clean data, you can get actionable insights in a matter of days with a hugely positive RoI.

There are a lot of "if"s in there that can render even the most well-intentioned technology useless. I'm still not convinced that turning your prevention and social work duties over to a machine will produce better results, whether in staff costs or in situation assessment.

It appears you have never spent much time programming computers. AI may be all the rage in the computing world right now, but whether you're programming an Arduino to blink an LED or building a sprawling AI system, there is an infinite number of ways to write pretty shit code. And the more complex the code, the easier it is for the shit bits to be overlooked and remain undetected for a long time, all the while producing wrong results. So maybe somebody who knocked over his elderly neighbour's wheelie bin gets assessed as 80 per cent likely to mug somebody at knifepoint, while somebody who stole the rims off a car in the street at night gets classed as medium-risk.
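To show the sort of thing I mean, here's a hypothetical risk scorer (weights and features invented, obviously) with the kind of slip that compiles, runs, looks plausible, and is quietly wrong:

    # Made-up scoring logic. The bug: the "buggy" version pairs weights
    # with features in the wrong order, so every score comes out inverted.
    WEIGHTS = {"prior_violence": 0.7, "petty_damage": 0.1}

    def risk_score(record):
        # Correct: each feature multiplied by its own weight.
        return sum(w * record.get(k, 0) for k, w in WEIGHTS.items())

    def risk_score_buggy(record):
        # Buggy: feature list declared in a different order to WEIGHTS.
        feats = [record.get("petty_damage", 0), record.get("prior_violence", 0)]
        return sum(w * f for w, f in zip(WEIGHTS.values(), feats))

    bin_knocker = {"petty_damage": 1}   # knocked over a wheelie bin
    prior_gbh = {"prior_violence": 1}   # actual history of violence

    print(risk_score(bin_knocker), risk_score(prior_gbh))              # 0.1 0.7
    print(risk_score_buggy(bin_knocker), risk_score_buggy(prior_gbh))  # 0.7 0.1

The wheelie bin knocker comes out as the dangerous one, and nothing crashes to tell you so.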

And even the best AI system is only as good as the data you feed it; garbage in still invariably produces garbage out. More than that, if you ask actual computer scientists, the majority will be wary of recommending it for the kind of applications law-and-order politicians daydream about, because computer scientists know what can go wrong. It's usually private companies that put their resources into developing these AI systems, because they know there are shedloads of money to be made selling them to politicians and governments. That does not mean their product is ethically or technologically sound.


>A classic example is Scared Straight. In the US, it was quite popular to take naughty kids into prison to show them where they could end up if they kept acting like a prick. Intuitively, this makes a reasonable amount of sense - if you're a delinquent teenager who isn't scared of a bollocking, you might be scared of the possibility of ending up in prison. Nobody bothered to evaluate these programmes for years; when we did, we found that they increased the crime rate, probably because it took away the fear of the unknown.


There was a Beavis and Butthead episode back in the day that poked fun at Scared Straight, with the two of them essentially concluding that prison was so cool a place that it was no deterrent but something to aspire to, because that's where all the tough guys were who didn't take shit from anybody.

Knowing a fair bit about life in the U.S., I think these programmes are usually the result of scared old white people institutionalising their hatred of the young. You see it with judges operating prison pipelines, with the way youngsters are punished even for ridiculously tiny amounts of drugs for personal use (not to mention that you can die for your country at 17 but can't legally drink a beer in a bar until four years later), with the godawful boot camps that scar kids' souls for life and do nothing to combat recidivism, and with the way teenagers are punished for sex with their peers in some states. And in the way successive Republican administrations have cut funding for non-abstinence sex ed, only for Democratic ones to reinstate it, back and forth.

Juvenile delinquency in nearly every European country is lower than in the U.S., and my long-standing theory is that this is because parents, and adults in general, in Europe don't see young people as a potential problem that needs dealing with. They are not seen as a threat to society, but as valuable future members of it who most of all need positive reinforcement.
>> No. 16814 Anonymous
5th December 2018
Wednesday 5:01 pm
16814 spacer
>>16811

>Some lad whining about how privilege wasn't real

Not me, though I remember that thread and don't recall it going the way you seem to think it did. Regardless, try not to identify others on this anonymous imageboard, particularly in the middle of a discussion about presuming people's innocence. Anyway.

>AI isn't magic and if you train your model on biased data then its predictions will be similarly biased.

That sounds like just as strong an argument in favour of my point as it does against it.

How sure are you that the government and civil service that brought us such hits as "most CCTV cameras per person", Tempora, the Snoopers' Charter and RIPA/IPA will produce a system that's biased in the way you hope rather than the way I fear?

I can't say I have much faith in a social care system being weighted towards helping people down on their luck, as the current solutions are demonstrably biased in the other direction.
>> No. 16815 Anonymous
5th December 2018
Wednesday 5:22 pm
16815 spacer
>>16814

>That sounds like just as strong an argument in favour of my point as it does against it.

You don't really have to know more about statistics than the average person to be aware that statistical data in and of itself can be massively flawed even with the best and most advanced data gathering methods.

It starts with designing the questionnaires used to carry out statistical research. That in itself is incredibly difficult, because the bias of the person designing them is already baked in; they will weigh some aspects more heavily than others. Then, once you have compiled the data, perhaps in a database, the question remains how you analyse it: what methods you employ, what you focus on. That once again leaves your survey wide open to all kinds of errors and biases which further skew reality. And on top of all that, there's the risk of poorly written code in the AI system that is supposed to make sense of it all.

Statistical data can therefore never be "neutral" in and of itself. It may sound counterintuitive, and you may say that all they do is count the number of reported crimes or something like that, but especially when you bring in AI, which relies on the interconnection of secondary aspects of a crime to judge likelihoods, that's where you can once again shoot yourself in the foot.

In the end, AI is just another buzzword for grandstanding law-and-order politicians, as I have said. Few of them actually know what they are dealing with. And sure, you can bring crime down in a region by keeping tabs on people; most trials so far indicate as much. But even if the end result is a lower incidence of crime, that says nothing about how many false positives you've had, how many innocent citizens you've wrongly suspected, or even whether all the people you have identified as "vulnerable" really needed your help at all.

In the end, AI is going to be just as hit and miss as putting ten more police on the streets at night or hiring ten more social workers, and I think you will be better off relying on humans to do the job. Also because it's much easier to justify human error than a malfunctioning, highly specialised AI system that has cost millions to implement.
>> No. 16817 Anonymous
5th December 2018
Wednesday 5:36 pm
16817 spacer
>>16815

I'm not particularly looking at the AI bit, more the 'using mass surveillance and people's data to decide if they are potential troublemakers' bit. Even if it is Keith doing it on a spreadsheet, the implications for society at large still exist.
>> No. 16818 Anonymous
5th December 2018
Wednesday 5:53 pm
16818 spacer
>>16814
>That sounds like just as strong an argument in favour of my point as it does against it.
Then one of us has misunderstood the other's point. Let me explain mine further for clarity:

The fact remains that the policing and justice system has its inherent biases, and the records we hold reflect those biases. Therefore, training a predictive model on those existing records will result in predictions that reflect those biases.

This has already been observed with similar endeavours in the US, for instance, a sentencing advisor used in some states has been recommending harsher sentences for black people because black people get harsher sentences.

Countering these biases isn't as simple as excluding data points (there are proxies that can fill them in) or counter-weighting.
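To make the proxy point concrete, a toy sketch with invented data: exclude the protected attribute, and a correlated stand-in such as postcode hands it straight back.

    import random
    random.seed(1)

    # Invented example: ethnicity is dropped from the features, but
    # postcode correlates with it, so a model can reconstruct it anyway.
    def make_person():
        group = random.choice("XY")
        # 90% of group X live in postcode 1, 90% of group Y in postcode 2.
        postcode = 1 if (group == "X") == (random.random() < 0.9) else 2
        return group, postcode

    people = [make_person() for _ in range(10000)]

    # "Remove" the attribute, then predict it back from the proxy alone.
    guess = {1: "X", 2: "Y"}
    recovered = sum(group == guess[pc] for group, pc in people)
    print(f"group recovered from postcode alone: {recovered / len(people):.0%}")
    # ~90% -- dropping the column doesn't drop the information.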
>> No. 16819 Anonymous
5th December 2018
Wednesday 6:02 pm
16819 spacer
>>16817

It essentially means a weakening of the presumption of innocence.

Very generally speaking, police and other authorities can't investigate the average innocent person with no hint at all that they might be guilty of a crime, "just because". The presumption of innocence in this case means that authorities are only allowed to become active when there is an indication that something might be up with you.

But if you gather data from pretty much every citizen for the specific purpose of checking if they have done something illegal, then that's already a few steps removed from the idea that generally speaking, an innocent citizen must be left alone.

Also, the effectiveness of this kind of mass data mining is doubtful. Many European countries have had blanket online data gathering and retention programmes in place; quite a few abandoned them again after public protest and high court rulings. When the results of all that data gathering were evaluated, it typically turned out that it didn't significantly reduce online crime, or even increase the rate of solved crimes, which had been the biggest argument in favour of the measure. Real professional criminals knew how to circumvent the data gathering, as they usually do, and the only people caught in slightly increasing numbers were a handful of hapless kiddie porn downloaders and filesharers. Even they increasingly learned to cover their tracks.

Most national high courts have ruled that all this did not justify putting an entire country's population under suspicion by keeping all their online data.
>> No. 16828 Anonymous
7th December 2018
Friday 11:37 am
16828 spacer

>>16818

>in the US, for instance, a sentencing advisor used in some states has been recommending harsher sentences for black people because black people get harsher sentences

And these self-reinforcing feedback loops are what make the whole idea so dangerous.

The same thing already happens with the "heat maps" local police in some countries use to predict at what time of day which areas of a city could see increased street crime. If you are then unlucky enough, especially as a black person, to be in that area at that time, you are going to look guilty even if all you verifiably did was pass along that street by sheer coincidence.

And then if you throw AI into the mix, the mere fact that you were in that area at a time when street crime was predicted to be most likely can worsen your crime probability score or whatever you want to call it.
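You can watch that happen in a toy simulation, every number invented: patrols follow recorded crime, and recorded crime follows patrols.

    # Two areas with IDENTICAL true crime. Area 0 starts with a few more
    # recorded incidents, so it draws more patrols, which record more
    # incidents, which draw more patrols...
    TRUE_CRIME = 100         # same underlying rate in both areas
    recorded = [110, 100]    # near-equal starting records

    for year in range(1, 7):
        shares = [r / sum(recorded) for r in recorded]
        # Deliberately crude, superlinear stand-in for "more patrols in
        # an area means more of its crime gets logged".
        weights = [s ** 2 for s in shares]
        recorded = [round(2 * TRUE_CRIME * w / sum(weights)) for w in weights]
        print(f"year {year}: recorded = {recorded}")
    # The gap widens every year even though nothing real has changed.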
>> No. 16846 Anonymous
8th December 2018
Saturday 3:25 pm
16846 spacer
>>16819
Did they really abandon it, though, or just put it under better cover?
Power trips are tough to let go of.
>> No. 16847 Anonymous
8th December 2018
Saturday 8:30 pm
16847 spacer
>>16846
I imagine some did what they did in this country and basically legislated to overrule the courts. Remember that "emergency bill" to reinstate "necessary" powers? You know, the powers that the courts had ruled they should not have had?
>> No. 16854 Anonymous
8th December 2018
Saturday 11:51 pm
16854 spacer
>>16847

I think Austria axed it completely, as well as the Czech Republic. And I think Frau Merkel's government wanted to go back to data collection after Germany's high court struck it down, banking on loopholes both in EU and German high court rulings, but some ISPs in Germany then sued the government on technicalities. And the end result is that they've got a law that says ISPs must store user data, but the ISPs aren't doing it because they won the lawsuit against the German government.

There was a whole story about it on ZDNet a while ago; pretty fascinating, but I can't find it now.
