CBP signs Clearview AI deal to use face recognition for 'tactical targeting'
by cdrnsf
Always easier when you can avoid the law and just buy it off the shelf. It’s fine to do this, we say, because it’s not being done by the government - but if they’re allowed to turn around and buy it we’re much worse off.
That's why it doesn't make sense to ban governments from doing things while still allowing private companies. Either it is illegal to surveil the public for everyone, or the government can always do it indirectly with the same effect.
I don't think the deal described here is even that egregious. It's basically a labeled data scrape. Any entity capable of training these LLMs is able to do this.
The difference is that a government can take personal liberty away from people in the most direct way. A private company can't decide to lock somebody away in prison or send them to death row. (Hopefully anyway.) So we put a higher standard on government.
That said, I do believe there ought to be more restrictions on private use of these technologies.
>A private company can't decide to lock somebody away in prison or send them to death row.
A private company can 100% do this in many ways. They already do this by putting up and using their technology in minority areas, for example.
It's a distinction without a difference. Private companies are partnering with the government to take away personal liberty.
We should ban the government from accessing data gathered by private companies by default, perhaps. I need to mull on it.
The point is that "who gathers it" should be irrelevant.
The government shouldn't be able to buy data that would be unconstitutional or unlawful for them to gather themselves.
On the other hand if a company is just aggregating something benign like weather data, there's no need to bar the government from buying that instead of building it themselves.
> The government shouldn't be able to buy data that would be unconstitutional or unlawful for them to gather themselves.
Now that sounds like a good argument to make in court! How do we do it?
I also personally think there are some private collections we should ban, or put in place limitations on how it can be used, in the interest of general privacy.
That is trickier to decide on and surely there's room to debate.
Prohibiting (or at least restraining) the de jure government but allowing a parallel corporate de facto government to keep growing (in scope, coercion, and influence) is exactly what has happened here. It's naive to think that the de jure government can always continue to "just say no" to the parallel power structure, keeping it under control of the de jure government. Rather, eventually the corporate government gets powerful enough to overrun the traditional mechanics that keep it subservient, no matter how strong those mechanisms may be. Then the corporate government goes to work subverting, devouring, and replacing the traditional government.
If we had wanted to avoid where we are now - staring down a full-on fascist dystopia - the surveillance industry ("tech") needed to be nipped in the bud 15+ years ago, with a GDPR equivalent and strong antitrust enforcement that prohibited anti-competitive bundling of [proprietary] software with [Metcalfe's-law] network services. The surveillance industry's power needed to be constrained to remain in line with our assumptions of natural rights and Constitutionally-limited government.
But a lot of people were being paid a lot of money to not think too hard about the implications of what they were building. And so every time the topic came up in our communities, those on the take would shout it down with a litany of rationalizations about why such constraints were not necessary.
Yeah but these companies are operating hand in glove with govt such that there's no discernible difference between the current system and government just doing it themselves. Ban it outright.
I don't disagree with the sentiment. I feel like what we're seeing lately is that private companies are doing the thing that would violate the 4th amendment if government did it, then they sell to the government. The idea that it's not the government itself violating the constitution because they did it through a contractor is pretty absurd.
What specific legal measures you'd use to enforce this, I don't know; there's some room for debate there.
I don't think there is an expectation of privacy for things you literally post to the public, like social media. Even the government doing the scraping directly I believe would not violate the 4th amendment. The third party doctrine also basically legalizes most types of search through people's "cloud data". To have an expectation of privacy, the data needs to not be shared in the first place.
I don't think tying the hands of the government is a viable solution. The sensitive data needs to not be collected in the first place via technical and social solutions, as well as legislation to impose costs on data collection.
- Teaching that "the cloud is just someone else's computer"
- E2EE cloud
- Some way of sharing things that doesn't involve pushing them to the whole internet, like Signal's stories.
- GDPR type legislation which allows deleting, opting out, etc
> The third party doctrine also basically legalizes most types of search through people's "cloud data"
This isn't actually true (it varies by the type of "cloud data", like content vs. metadata, and by the circuit you're in), and there are multiple recent carveouts (eg geofence warrants) suggesting that, when the Supreme Court bothers to look at it again, they won't find it as clear-cut as it seemed decades ago. Congress can also, at any time, make clear that they don't like it (see the Stored Communications Act).
It's also, just to be clear, an invented doctrine, and absolutely not in the constitution like the fourth amendment is. Don't cede the principle just because it has a name. Technical and social solutions are good, but we should not tolerate our government acting as it does.
> I don't think there is an expectation of privacy for things you literally post to the public, like social media
Neither is there an expectation that automation would slurp it up and build a database on you and everyone else. Maybe the HN crowd is one thing, but most normies would probably say it shouldn't be allowed.
> Even the government doing the scraping directly I believe would not violate the 4th amendment.
Every time I see someone make a statement like this I think of the Iraq war era, when a Berkeley law professor said torture is legal. Saying that something which clearly violates the spirit of our rights is OK on a technicality: I would not call that a moral high ground.
> The sensitive data needs to not be collected in the first place via technical and social solutions,
At this point and points forward I think your comment is much more on the mark.
I think we clearly both agree that mass surveillance is problematic regardless of whether it is done by the government or corporations. With that said
> normies would probably say it shouldn't be allowed
Despite knowing about this, most continue supporting the various companies doing exactly that, like Facebook and Google.
> Neither is there an expectation [...]
Expectation is not law, and it cuts both ways. The authors of the 4th and 5th amendments likely did not anticipate the existence of encryption - in their view, the flip side of the 4th amendment is that with a warrant, the government could search anything except your mind, which can't store that much information. We now get to enjoy an almost absolute right to privacy due to the letter of the law. You might feel that we should have that right anyway, but many other governments with a more recent/flexible constitution do not guarantee that, and in fact require key disclosure.
> > Neither is there an expectation [...]
> Expectation is not law.
It is in this case.
Expectation of privacy is a legal test based literally on what "normies would probably say". If, as a society, we're moving more and more of our private effects to the cloud, there is a point where there's an expectation of privacy from the government there, regardless of the shadiness of the company we trusted for it, and regardless of what's convenient for the government.
https://www.law.cornell.edu/wex/expectation_of_privacy
Carpenter v. United States is a great example of this, where a thing once thought as obviously falling under the third party doctrine (cell tower location information) was put definitively within protection by the fourth amendment because of ongoing changes in how society used and considered cell phones.
And I forgot about this but just saw it referenced in the wikipedia article: it's notable that Gorsuch's dissent on the case argued for dropping the third party doctrine completely:
> There is another way. From the founding until the 1960s, the right to assert a Fourth Amendment claim didn’t depend on your ability to appeal to a judge’s personal sensibilities about the “reasonableness” of your expectations or privacy. It was tied to the law. The Fourth Amendment protects “the right of the people to be secure in their persons, houses, papers and effects, against unreasonable searches and seizures.” True to those words and their original understanding, the traditional approach asked if a house, paper or effect was yours under law. No more was needed to trigger the Fourth Amendment....
> Under this more traditional approach, Fourth Amendment protections for your papers and effects do not automatically disappear just because you share them with third parties.
Thanks for the legal clarification. I don't disagree that the third party doctrine is rather overbroad.
I would still prefer legislation and tech that actually reduce data collection though. Fifth amendment protections are much stronger, and cannot be overcome by a warrant, whereas third parties can be subject to subpoena.
The problem comes from what you post under something that's not your name.
Personally, I'm thinking perhaps the answer is the other way around:
Any company that collects data beyond what you directly provide them must make a best effort to send you an e-mail every year with the data in a standardized format, or links to the data. (Doesn't need to be burdensome--documents go behind a UUID with a non-readable directory. You either know the URL or you don't.)
If you hold data you didn't disclose in that e-mail, you pay $1 per item + costs. (If you have useful amounts of data, that per-item fee will add up really fast.)
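The "you either know the URL or you don't" part of this proposal can be sketched very simply. This is a minimal illustration, not a real service: the domain and function name here are made up, and the scheme assumes directory listing is disabled on the server so the random UUID acts as a bearer token.

```python
import uuid

def export_url(base: str = "https://example.com/exports") -> str:
    """Return an unguessable URL for one user's yearly data export.

    uuid4 carries 122 random bits, so the path is infeasible to
    enumerate; anyone who has the link can fetch the document, and
    nobody else can find it (assuming no directory listing).
    """
    token = uuid.uuid4()
    return f"{base}/{token}/data.json"

url = export_url()
```

The same capability-URL pattern is what many file-sharing services use for "anyone with the link" access; the privacy of the document rests entirely on the unguessability of the token.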
But that is his point with "or the government can always do it indirectly with the same effect"
The company doesn't have that power, but the government can compel companies to provide them with the same data as long as it exists, and then abuse it in the same way as if they had collected it themselves.
A private company can put you on a list and you'll never have a home again.
The separation between private and the government is purely theatrics - a mere administrative shell.
I really don't understand why people treat it with such sacrosanct reverence.
It reminds me of a cup and ball street scam. Opportunistic people move things around and there's a choir of true believers who think there's some sacred principles of separation to uphold as they defend the ornamental labels as if they're some divine decree.
I mean come on. Know when you're getting played.
In some cases yes, especially when it comes to surveillance, the distinction doesn't feel like very much. When the government hires a contractor specifically because they break the spirit of the 4th amendment, it's hard to argue that it's not the government breaking the law.
A private company can rat you out to the government in the same way that a private citizen can report you to the police. I don't see a reasonable way to change this.
The government should be held to higher standards in terms of being able to appeal its actions, fairness, evidentiary standards. But the government shouldn't necessarily be prevented from acquiring and using information (which is otherwise legally obtained).
I don't disagree that we should perhaps have more restrictions on private processing of data, though -- GDPR-style legislation that imposes a cost on data collection is probably sufficient.
People die all the time because of decisions made by private companies.
Uh, the government can pay the private company for the data so they can lock those people up.
Cops are legally forbidden from surveilling everyone at all times using machines. Explicitly so. Yet, if a company starts up and surveils everyone at all times, and their only customer is Cops, it's all Okay somehow. The cops don't even need a warrant anymore.
What's worse, is that third party doctrine kills your rights worse than direct police surveillance.
Imagine if you will, back in the day of film cameras: The company developing your film will tell the police if you give them literal child porn but otherwise they don't. But imagine if they kept a copy of every picture you ever took, just stuffed it into a room in the back, and your receipt included a TOS about you giving them a license to own a copy "for necessary processing". Now, a year after you stopped using film cameras, the cops ask the company for your photos.
The company hands it over. You don't get to say no. The cops don't need a warrant, even though they 100% need a warrant to walk into your home and grab your stash of photos.
Why is this at all okay? How did the supreme court not recognize how outright stupid this is?
We made an explicit rule for video rental stores not to be able to do this! Congress at one time recognized the stupidity and illegal nature of this! Except they only did that (the Video Privacy Protection Act) because a politician's video rental history was published during his Supreme Court confirmation.
That law is direct and clear precedent that service providers should not be able to give your data to the cops without your consent, but this is America so precedent is only allowed to help businesses and cops.
> The difference is that a government can take personal liberty away from people in the most direct way. A private company can't decide to lock somebody away in prison or send them to death row. (Hopefully anyway.) So we put a higher standard on government.
We put higher standards on the government because companies have the biggest propaganda coffers.
It’s not some rational principle. Money goes in, beliefs come out.
What would such a ban look like?
A private company can surely link its own cameras and data to create a private use database of undesirables. I’m certain that Walmart and friends do exactly this already. It’s the large scale version of the Polaroids behind the counter.
It can be banned explicitly as a regulation on surveillance cameras. Like:
- The footage must be secured / stored only locally, can be used only in legal proceedings or liability claims, and can be stored for a maximum of 1 (or a different number) year
- It cannot be sold, used to train AI, or processed for marketing or other purposes without the consent of everyone involved (in practice, impossible).
- And no, people cannot "agree" to things just by entering the premises or camera view
- It is illegal to make decisions based on illegally obtained (as per above) analytics, like refusing entry/membership/service, with a private right of action
Wouldn't "Any person found to have implemented a system which violates the rights of people in xyz way will be punished with imprisonment" work?
In what way? A business can refuse to service any individual as long as it’s not a direct violation of things like civil rights laws.
It's possible to understand these things as "civil rights", unless you have a very narrow and likely pejorative understanding of the term.
The whole point of a regulation is to ban something they are currently allowed to do. There was a time before the civil rights laws when you could discriminate by race, you know.
Or that the government isn't allowed to purchase anything they'd normally need a warrant for?
So everyone and their mom and all the foreign governments can buy the data, but not your own government? Do you really think this is a sustainable arrangement?
If the choice is between my own government and a foreign government spying on me, I'd rather the foreigners.
I would much rather have a democratically elected and constitutionally constrained government than private enterprise with limitless power. It would also be helpful if the “government is bad” people would stop electing the people who seek to sabotage the government.
Facial recognition is not a legitimate private enterprise. It is a complete failure of legislation that it is allowed to exist.
Apple Face ID is not a good or legitimate feature? You just upset hundreds of millions of people
Most people miss Touch ID on the iPhone.
So biometrics are OK as long as it's not your face? Trying to derive the first principle here.
When it's allegedly stored in the Secure Enclave and the data is unable to leave the device, it is obviously not the same interaction. I personally would be fine with a law making that distinction, with on-device recognition being legal.
Just like when Verizon sold its customers' precise location history to data brokers who then sold it to law enforcement agencies.[^1] Laundered.
[^1]: https://arstechnica.com/tech-policy/2025/09/court-rejects-ve...
That's not how the law works in the US. The government cannot have a third party take action on its behalf to do something that would be illegal for the government to do itself. This is why the Biden administration had a restraining order filed against it, on account of them pressuring social media companies to ban content it didn't like. This violated the First Amendment, despite the fact that it was a third party that was doing the actual banning at the behest of the government.
The government could legally create its own facial recognition technology if it wanted to. They're not avoiding the law, facial recognition isn't illegal.
That's pretty much how KYC works. The government can't just willy nilly demand papers of everyone going into the bank to open up an account due to the 4th amendment. So they just make the bank do it so it is a "private" act, and then for instance IRS is authorized to do warrantless seizure on the accounts which are now tied to names that were forced to be revealed under KYC laws.
The government doesn't need a warrant to access bank records, as per the US's banking laws. They just need an administrative subpoena, which doesn't have to be signed off by a judge.
This is not an example of the government sidestepping laws through a third party. You just don't like the existing laws, and would prefer to make certain things illegal that are presently legal.
There wouldn't be any identity linked to an anonymous bank account to 'access', were it not for the warrantless search of your papers required under KYC but done via a private entity (sidestepping the 4th amendment) in order to open an account. That part is done without even a subpoena.
That is, the US banking laws force private actors, under color of law, to systematically inspect the papers of those opening an account, which conveniently sidesteps the 4th amendment implication of the government searching the papers themselves at everyone opening an account at the bank. And then allows the government to act on the information of that forced search, even without a warrant.
---------- re: below due to throttling -------
I'm referring to this:
>The government cannot have a third party take action on its behalf to do something that would be illegal for the government to do itself.
It is illegal for the government to violate the 4th amendment, whether or not a 'law' beyond what is written in the constitution is present.
Clearly the government would love to just take all your information directly when you open an account, as that would be even better for them, but due to the 4th amendment they can't do that. Just asking, or requiring the bank without a warrant to act on the information or reveal it, is almost as easy, so they sidestep that by requiring, via the law, that the bank search your papers instead. It's effectively a government-imposed search, but carried out by a third party.
--------------------
>This is just factually wrong. The Bank Secrecy Act specifically requires that banks to provide this info. The 4th amendment does not prohibit this. If a bank refused to provide this required information, the government would go in and get that information directly.
>Again, no law is being avoided. You just don't like the law.
This is not 'just factually wrong.' The bank is doing the search instead of the government: a blanket search of everyone, without a subpoena, without individualized notice, without any event that would trigger reporting to the government under the BSA. Even then, the bank is still required to collect the information, even in instances where it never ends up being transmitted to the government. You're saying the portion of data the government collects might be 4A-compliant, but that doesn't mean the private actor being forced to collect information that doesn't even get reported would be 4A-compliant if the government did it. You're only saying that the subset of required KYC information that ends up transmitted to the government was 4A-compliant, which isn't sufficient to establish that the government could have collected, under the 4A, all the information it requires the bank to gather.
>the government would go in and get that information directly
A blanket sweep of everyone's information willy nilly by the government is not 4A compliant, that's why they've had the bank do it on their behalf.
> Clearly the government would love to just take all your information directly when you open an account, as that would be even better for them, but due to the 4th amendment they can't do that
This is just factually wrong. The Bank Secrecy Act specifically requires banks to provide this info. The 4th amendment does not prohibit this. If a bank refused to provide this required information, the government would go in and get that information directly.
Again, no law is being avoided. You just don't like the law.
> A blanket sweep of everyone's information willy nilly by the government is not 4A compliant, that's why they've had the bank do it on their behalf.
Wrong again. If retrieving this info was a violation of the Fourth Amendment, then banks could just say "no" when the government asks them for customer data.
Groups did sue following the passage of the Bank Secrecy Act, and argued that it violated the Fourth and Fifth amendments. But they lost, and the Supreme Court determined that it did not violate the constitution.
For the third time: no law is being "avoided", you just don't like the law.
Right, but the point is, no law is being avoided. The comment I responded to wrote:
> Always easier when you can avoid the law and just buy it off the shelf. (Emphasis mine)
No law is being avoided, neither in your banking example nor in the situation with Clearview. To be sure, people can have whatever opinion on the law that they want. But I do want to make it clear that the government is not "avoiding" any law here.
> No law is being avoided
Following the conversation, this reads as too strong a statement. The Constitution is law, and it (the fourth amendment) is being avoided via the Bank Secrecy Act. The Constitution supersedes any conflicting Acts of Congress.
Groups did sue after the Bank Secrecy Act, and the cases went all the way to the Supreme Court. The Supreme Court determined that it did not violate the constitution: https://en.wikipedia.org/wiki/Bank_Secrecy_Act
> Shortly after passage, several groups attempted to have the courts rule the law unconstitutional, claiming it violated both Fourth Amendment rights against unwarranted search and seizure, and Fifth Amendment rights of due process. Several cases were combined before the Supreme Court in California Bankers Assn. v. Shultz, 416 U.S. 21 (1974), which ruled that the Act did not violate the Constitution
(1) KYC requirements have changed significantly since 1974, so as-applied findings from 1974 won't apply to what we're referring to today.
(2) SCOTUS wrote that the bank customers (rather than the bankers in the suit) themselves likely didn't have standing, which meant the decision was based more around whether the banks had their rights violated. I am not arguing that the bank had its rights violated - I even conceded some subset of transmitted information might be 4A compliant - but rather that the wholesale KYC regime (largely now based on post-9/11 acts) isn't 4A compliant, and is an injury to the customers rather than to the bank. I am arguing the clients themselves are having their rights violated, and to consider that to the full extent you need a case where the clients have standing, unlike what the justices thought to be the case in 1974.
(3) SCOTUS has overturned their own rulings on constitutionality before, without any material change in the relevant portions of the constitution in the interim.
(4) Failing all the above, the founders also noted that our government acts imperfectly, and even noted that as a last resort it can be replaced when this becomes unworkably pervasive. The fact that a bunch of guys in wigs interprets the 4A as allowing mass, government-imposed, willy-nilly search of your papers only makes it more strongly binding on the rest of the government to follow; it does not create an objective truth.
At this point you've stopped attempting to argue that banking's information collection requirements are "avoiding" the law, and instead have pivoted to speculation about whether the Supreme Court might reinterpret the Constitution such that it prohibits this kind of data collection mandate.
But until such a ruling from the Supreme Court is made, the previous ruling is the law of the land. So as I said: it's not the government avoiding a law by acting through a third party, you just don't like the law. And it sounds like you don't disagree.
I'm arguing the statutory law as applied today is avoiding the constitution, which is the supreme law. That is, the government is ignoring the supreme law by acting through a 3rd party. This doesn't "pivot" from my original position. Claiming so just allows you to "pivot" around my 4 points, and "avoid" addressing the inconvenience of their existence with a thought terminating quip that it just boils down to I "don't like the law".
You're "speculating" that a SCOTUS decision from 1974 applies to KYC today, which has expanded significantly since the passing of the BSA into a much broader search of your papers, updated significantly post-9/11. Moreover, the ruling you cite states they didn't even think the clients themselves had standing in that suit, which weakens your argument, since my assertion was that the clients were having their rights violated and your cited case largely contemplated whether the bankers had theirs violated.
> This is why the Biden administration had a restraining order filed against it, on account of them pressuring social media companies to ban content it didn't like. This violated the First Amendment
It's very strange of you to leave out that the extremely right-wing 5th Circuit's opinion was overturned 6-3 by SCOTUS because "pressuring social media companies to ban content" was a complete fabrication the plaintiffs failed to support whatsoever.
Regardless of the subsequent lifting of the order, it still illustrates that the government cannot make private parties carry out illegal acts on its behalf. If anything, the fact that the circuit's decision was later overturned shows that the courts are erring on the side of restraining the government when they try to make third parties carry out actions that the government cannot do legally.
It's simply bizarre to claim that a blatantly partisan circuit court issuing capricious restrictions on their political opposition and having them vacated by SCOTUS is evidence of "the courts" erring against "the government" generally. The decision was overturned because the plaintiffs' case was a baseless fiction that the Biden administration was ever even implicitly compelling those third parties to do anything. The plaintiffs' standing was so plainly nonexistent that even 3/6 of the majority from Kennedy v. Bremerton School District couldn't pretend there was a case. The only example that case serves is of the most Republican-allied circuit court consistently issuing garbage opinions to empower Republican administrations and reconsolidate partisan policymaking to itself during Democratic administrations.
Regardless of how wrong the 5th circuit decision was in this case, the point was just to highlight that the law doesn't allow the government to circumvent restrictions by going through a third party. Clearly this particular case triggers strong feelings in you, but this is really not at all pertinent to the main point I'm making.
A point hasn't actually been made when the sole example, presented as fact, was a partisan lie that collapsed under extremely basic scrutiny. The decision had nothing to do with this strange notion that the courts err on the side of restraining the government and everything to do with the fact the 5th Circuit [yet again] presumed an alternate reality in order to achieve a Republican policy victory.
This is why we should shun the people that build this stuff. If you take a paycheck to enable fascism, you're a bad person and should be unwelcome in polite society.
"Tactical Targeting" - you just know someone's PowerPoint presentation used the word "synergy" in it too.
How long before they bring the price down and local PDs start using it too?
Not sure if you're joking but Clearview's primary customers are local or metro police departments.
local laws forbidding facial recognition tech have never been wiser
225k USD per year sells us cheaply!
Completely unsurprising, as Clearview has been turning Orwell over in his grave for years.
And this right here is why Clearview (and others) should have been torn apart back when they first appeared on stage.
I 'member when people warned about something like this having the potential to be abused for/by the government; we were ridiculed at best. And look where we are now, a couple of years later.
"This cannot happen here" should be classified as a logical fallacy.
As stated in many of the comments in my code, where some else branch claims "this shouldn't be happening."
I keep reading this as “CBS signs…” and can’t help thinking about that uncomfortable possible future moment.
There are certain people who believe that average citizens can be held responsible for the actions of their government, to the point that they are valid military targets.
Well, if that's true then employees of the companies that build the tools for all this to happen can also be held responsible, no?
I'm actually an optimist and believe there will come a time when a whole lot of people will deny ever having worked for Palantir, for Clearview, and so on.
What you, as a software engineer, help build has an impact on the world. These things couldn't exist if people didn't create and maintain them. I really hope people who work at these companies consider what they're helping to accomplish.
> average citizens can be held responsible for the actions of their government, to the point that they are valid military targets.
What do you mean by this? If a government conscripts "average citizens" into its military then they become valid military targets, sure.
I'm not sure why you think this implies that developers working for Palantir or Clearview would become military targets. Palantir builds software for the military, but the people actually using that software are military personnel, not Palantir employees.
You're trying to find claims that aren't there, they are explicitly saying that "certain people" (which may or may not include the original poster) think that deliberately killing civilians is fine if their government is bad enough. They then go on to rhetorically question if those same "certain people" would find terrorist killings of tech workers who work at companies they don't like justified because they help the US government, even if it's in a purely civilian context.
I never worked at a company that could broadly be considered unethical, I don't think. But it was always a bit disheartening how many little obviously unethical decisions (e.g., advertised monthly plans with a small print "annual contract" and cancellation fee) almost every other employee would just go along with implementing, no pushback whatsoever. I don't know what it is, but your average employee seemingly sees themselves as wholly separate from the work they're paid to do.
I have friends who are otherwise extremely progressive people, who I think are genuinely good people, who worked for Palantir for many years. The cognitive dissonance they must've dealt with...
I used to work at a company where we did hosting/ maintenance/ etc for large-ish content sites.
At some point a project came across my desk: a hard-right propaganda site for college students that I needed to migrate.
Folks might quibble about the reality of what that site was doing, but that's how I (as a person with an MA in rhetoric) understood it, so humor me on my assessment. It was a pretty regular fixture on the Drudge Report, though, so that might help with context.
It was a very popular site, with multiple millions of unique visitors every month, and was a lot of easy cash for the business.
At that point in my career, I felt that not doing that work would be a rather "privileged" pose to strike- it would have negative impacts on my coworkers and the very small business in general, while I would just be "uncomfortable" either way.
At some point I was asked to build out a "tracker" for things like Confederate statue removals, IIRC sometime around the "Unite the Right" events.
I turned the work down, even though it pissed off my boss and forced a different co-worker to do the work.
That situation was what helped me understand that the immoral and "privileged" position was to do that kind of work, which wouldn't quickly and directly harm me but was likely to harm other people at some point.
However, what I also realized was that doing that work is probably harmful to me, too, as a queer leftist who now wishes I didn't feel like I need to own guns.
Almost everyone in that small business was queer or brown or both. At some point after (I am vaguely recalling) an 8-chan related shooter, the boss of the business stopped doing updates or work on the site.
All that is to say, I used to feel like "speaking up when I didn't want to do something unethical" was a privileged thing to do but I have come to realize that the inverse is true.
> I don't know what it is, but your average employee seemingly sees themselves as wholly separate from the work they're paid to do.
Hannah Arendt coined the term “the banality of evil”. Many people think they are just following orders without reflecting on their actions.
> I have friends who are otherwise extremely progressive people, who I think are genuinely good people, who worked for Palantir for many years. The cognitive dissonance they must've dealt with...
There's really nothing different about it than people working for Meta, AWS or Microsoft, and there are likely a dozen of those among us in this thread alone (hi!). Especially pre-Trump. Without the latter two companies gladly committing to juicy enterprise contracts with Palantir (continuing to this day), it would barely exist. Zuckerberg has caused orders of magnitude more death and destruction in the world than Karp could even dream of. Not for the latter's lack of trying, of course. And with similar amounts of empathy: Bezos, Musk, Zuckerberg, and Thiel's combined empathy amounts to zero. None of them has more than any of the others.
To be fair, I think at least at Microsoft, Google and indeed Palantir a lot of people have had large degrees of separation to the despicable stuff the companies are responsible for. Working on say, Xbox (Microsoft), or Gmail (Google), or optimizing Airbus manufacturing (Palantir), one can see how it's quite easy to defend at a surface level.
In that sense I consider Meta the worst because Facebook/Instagram are effectively the entire business, with a little side of WhatsApp. Almost everyone is working directly on those two products, or is maybe one degree separated.
I'm sure this won't be a popular opinion on here, but it may help you give a different point of view about your friends in a way. You should talk with them about it. Pre-MAGA, you'll likely find that ironically the place they worked at was internally more progressive than Meta or Microsoft. Did your friends quit voluntarily or were they booted?
This is an interesting article [0].
This comment probably won't be very popular here, but I do invite those who instinctively reach for the downvote button to have a calm think and maybe reply before they do so.
[0] https://www.cnbc.com/2025/10/30/palantir-trump-karp-politics...
>There are certain people who believe that average citizens can be held responsible for the actions of their government, to the point that they are valid military targets.
Yeah we typically call those people terrorists or war criminals.
Or heroes, if they win.
No, I will continue to call them terrorists or war criminals. You can feel free to lick their boots though.
I'm sure the anti-vax crowd who were foaming at the mouth over the vaccine containing tracking chips will explain why this is needed and why it's not a big deal.
"Tactical Targetting": Whitewash stochastic terrorism to attack brown people before midterms.
We need a Constitutional amendment that guarantees a complete right to anonymity at every level: financial, vehicular, travel, etc. This means the government must not take any steps to identify a person or link databases identifying people until there has been a documented crime where the person is a suspect.
Only if an anonymous person or their property is caught in a criminal act may the respective identity be investigated. This should be sufficient to ensure justice. Moreover, the evidence corresponding to the criminal act must be subject to a post-hoc judicial review for the justifiability of the conducted investigation.
Unfortunately for us, the day we stopped updating the Constitution is the day it all started going downhill.
That will be wildly unpopular with both parties and, most importantly, their constituents. I doubt even the Libertarian Party, should they win the presidency, House, and Senate, could pull it off.
Note that the Amendment would apply only to the government, not to private interests. Even so, it could be unpopular among advertisers and data resellers, e.g. Clearview, who sell to the government. I guess these are what qualify as constituents these days. The people themselves have long been forgotten as constituents.
What do you mean "even" the libertarian party? Libertarians would remove whatever existing laws there are around facial recognition so that companies are free to do whatever they like with the data.
[dead]
Maybe. Anonymity is where bad actors play. Better to have better disclosure and de-anonymization in some cases. If some live in fear (e.g. of cartels), go after the cartels harder than they go after you.
> Maybe. Anonymity is where bad actors play.
The problem is when the government changes the definition of 'bad actor'.
Anonymity is where little bad actors play. The big ones don't need to be anonymous because their nefariousness is legal, or they don't get prosecuted. See: waves vaguely in the direction of the US government.
That said, the recent waves vaguely in the direction of the US government has demonstrated the weakness of legal restrictions on the government. It's good to have something you can point to when they violate it, but it's too easily ignored. There's no substitute for good governance.
> Anonymity is where bad actors play
That is a myth spread by control freaks and power seekers. Yes, bad actors prefer anonymity, but the quoted statement is intended to mislead and deceive because good actors can also prefer strong anonymity. These good actors probably even outnumber bad ones by 10:1. To turn it around, deanonymization is where the bad actors play.
Also, anonymity can be nuanced. For example, vehicles can still have license plates, but the government would be banned from tracking them in any way until a crime has been committed by a vehicle.
Not sure why you say that statement was intended to deceive?
Both good and bad actors benefit in the current system from anonymity. If bad actors had their identities revealed, they'd have a lot harder time being a bad actor. Good actors need anonymity because of those bad actors.
Don't we already have facial recognition technology that isn't based on AI? Why is throwing AI into the mix suddenly a reasonable product? Liability waivers?
I think the facial rec systems you're thinking of will recognize faces, but not ID them. They need you to label a face, and then it recognizes that face with a name from there on. Clearview is different in that you can provide it an unknown face and it returns a name. Whether it's just some ML based AI vs an LLM, it's still under the AI umbrella technically.
Uh no? Facial recognition to names has been the bread and butter of facial recognition since the beginning. It’s literally the point.
There are plenty of facial rec systems. Thinking of systems like in iOS Photos, or any of the other similar photo library systems. I think pretty much everyone would be freaked out if they started IDing people in your local libraries.
Facebook was doing that 10 years ago
Yes, and that would fall under "any of the other similar photo" category
huh?
Unsure what you mean by "starting IDing"? The majority of businesses in the US do it already. All banks use facial recognition to know who comes through their door (a friend who works in IT at Bank of America told me they implemented it across all Florida branches sometime in 2009), as do most large chain gas stations, car rentals, and most hotels. I was recently booted out of a Mazda dealership in Florida because 11 years ago in Georgia I sued a Toyota dealership over a lemon sale, and now they're both under the same ownership and my name came up on a "no business" alert when I entered their offices.
Note that there is no difference in the model or in the training. The only thing needed to convert iOS Photos into one that IDs people is access to a database mapping names to images. The IDing part is done after the "AI" part; it's just a dot product.
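To make that concrete, here is a minimal sketch of embedding-based identification. The embeddings are made-up random vectors standing in for what a real face model would output, and the names are hypothetical; the point is only that "recognition" vs "identification" differs by one labeled lookup table plus a dot product:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # L2-normalize so the dot product equals cosine similarity
    return v / np.linalg.norm(v)

# Hypothetical labeled gallery: name -> 128-dim face embedding.
# This lookup table is the only extra ingredient an "ID" system
# needs beyond the face model itself.
gallery = {name: normalize(rng.normal(size=128))
           for name in ["alice", "bob", "carol"]}

def identify(probe, gallery, threshold=0.5):
    """Return the best-matching name, or None if nothing is close."""
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = float(normalize(probe) @ emb)  # the dot product
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A slightly noisy copy of alice's embedding should match alice;
# an unrelated random vector should match nobody.
probe = gallery["alice"] + 0.05 * rng.normal(size=128)
print(identify(probe, gallery))
```

The model that produces the embeddings never changes; swapping an anonymous cluster label for a name in the gallery is the entire difference.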
Huh? What relevance does that have with the discussion?
After the literal first one from the 1960s, which just measured the distance between nose and mouth and things like that, everything else has been based on AI.
If my memory serves, we had PCA- and LDA-based approaches in the 90s, and in the 2000s a lot of hand-crafted AdaBoost classifiers and (non-AI) SIFT features. This is where 3D sensors proved useful, and it's the basis for all sci-fi portrayals of facial recognition (a surface depth map drawn on the face).
In the 2010s, when deep learning became feasible, facial recognition, like all other AI, moved to end-to-end neural networks. That is what is used to this day. It is pretty much the first iteration to work flawlessly regardless of lighting, angle, and whatnot. [1]
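For the curious, the 90s-era PCA approach (eigenfaces) is simple enough to sketch. Random vectors stand in here for flattened grayscale face images, so this is purely illustrative of the mechanics, not a working recognizer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for flattened grayscale face images; real eigenfaces
# work the same way on e.g. 64x64 = 4096-dim pixel vectors.
n_faces, dim, n_components = 50, 256, 10
faces = rng.normal(size=(n_faces, dim))

# PCA: center the data, then take the top right-singular vectors.
# Rows of vt are the "eigenfaces".
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:n_components]

def project(img):
    """Represent an image by its coordinates in eigenface space."""
    return eigenfaces @ (img - mean_face)

def nearest(img, db):
    """Recognize by nearest neighbor in the low-dim projection."""
    coords = project(img)
    dists = [np.linalg.norm(coords - project(f)) for f in db]
    return int(np.argmin(dists))

# A slightly noisy copy of face 7 lands nearest to face 7.
query = faces[7] + 0.1 * rng.normal(size=dim)
print(nearest(query, faces))
```

The whole pipeline is linear algebra, which is why lighting and pose changes broke it so easily compared to the later end-to-end networks.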
Note about the terms AI, ML, Signal processing:
In any given era:
- whatever data-fitting/function approximation method is the latest one is typically called AI.
- the previous generation one is called ML
- the really old now boring ones are called signal processing
Sometimes the calling-it-ML stage is skipped.
[1] All data-fitting methods are only as good as their data. Most of these were initially trained on Caucasian faces, so many of them were not as good for other people. These days the ones deployed by Google Photos and the like of course work for other races as well, but many models don't.
This is the frontline fascist unit in the newly fascist US government. Fear anything they get funding to do. They have one aim and none of it is for the betterment of the people.
Wear a face mask in public. Got it.
I think anything short of fully obscuring your face (a-la ICE-agent/stormtrooper) will be merely a mitigation and not 100% successful. I recall articles talking about face recognition being used "successfully" on people wearing surgical masks in China. In the US they ask you to remove face masks in places where face recognition is used (at the border, TSA checkpoints), but I would be unsurprised if that isn't strictly needed in most cases (asking people to remove them preemptively just ends up being faster for throughput).
Probably room to add little cheek pads or other shape-shifters under the mask.
You have to change how you walk and sound as well.
99.9% of people walk around with an electronic device that identifies them. If a particular person doesn’t, it should be trivial to filter out all the people that it couldn’t have been, leaving only a small list of possible people.
Aren't we back to where this is illegal again, unless you're an ICE agent.
"Hey man, doctor's orders. Gotta wear it to get allergy relief. And no, can't ask about it... HIPAA stuff."
It is not a good idea to lie to an employee of the USA.
You and I pay that employee's salary.
Who said it's a lie? It's also not a good idea to operate a police state.
Sadly, I'm sure that will go over "not well" with ICE agents who will happily assault you for carrying a phone...
I disagree with the shooting too, but this is such a massive oversimplification of the event.
Alright, I'll rephrase - "ICE agents have shown a bias towards escalation than de-escalation in conflict situations, be it pepper spray, assault, detention, or worse. I think that trying to get into a shouting match with them about HIPAA violations on removing your face mask are not likely to result in "okay, carry on, as you were"."
"I'll show you mine if you show me yours"
Your gait, I think, is more useful than your face anyway, and my understanding is it's more difficult to disguise. So you'll need a wheelchair/scooter and a mask in public.
Putting a rock in your shoe instantly changes your gait signature.
Thank you Cory Doctorow and "Little Brother". That book was prescient. And free.
Frankly, I never imagined when I read that decades ago, that it could be underselling the horror.
If you have not yet heard of it, look into gait recognition. Any battle for anonymity is a losing one, it appears.
In that case, guess it's time to start thinking of ways to make it unappealing to act upon the intelligence they've gathered upon us.
[flagged]
Wired still seems to write some good pieces.
are you using a vpn or something like that that might look like “you” have read wired articles?
I subscribe to keep the reporting going. Journalism costs money.
Most Americans don’t pay for news and don’t think they need to - https://news.ycombinator.com/item?id=46982633 - February 2026
(ProPublica, 404media, APM Marketplace, Associated Press, Vox, Block Club Chicago, Climate Town, Tampa Bay Times, etc get my journalism dollars as well)
Are we the same person?
I subbed to Wired last year during a sale and uh... I was never given a premium account linked to my email and support would never answer me. I signed up for the print edition as well and never received any of those. I was getting their newsletter though and that was new. Then I emailed to cancel when I got a billing notification to my email and they were able to cancel it just fine so apparently I did have an account? And then like two weeks ago I received the latest print edition.
Truly have no idea what that was about, but anyway glad to see someone else out here supporting almost all the same news orgs as me (404media is amazing!)
>Journalism costs money.
They've sold out for years already, maybe decades. Why fund them now when the corruption is out in the open?
AP is really one of the few places I'd even consider donating to at this point.
[flagged]
[flagged]
None of your links allege that Hoan Ton-That or Richard Schwartz is a white supremacist. What the Huffington Post piece does allege, is that for 3 weeks Smartcheckr (a different LLC that would later have its assets transferred to Clearview AI) hired one Douglass Mackey. Douglass Mackey had an online alias "Richard Vaughn" that was used to post white supremacist content. Ton-That states that he was unaware that Mackey was the real person behind Richard Vaughn.
There's a vast gulf between "Clearview AI was founded by white supremacists" and "Smartcheckr, which later merged with Clearview AI, employed for 3 weeks someone who posted white supremacist content under a pseudonym, unbeknownst to the Clearview AI founders".
In fact, neither the Buzzfeed article nor the NYTimes piece accuse anyone of white supremacy.
SmartCheckr was founded/owned by Hoan Ton-That and Richard Schwartz. So they transferred the assets of a company they already founded/owned to another company that they founded/owned: "Clearview AI was founded in 2017 by Hoan Ton-That and Richard Schwartz after transferring the assets of another company, SmartCheckr, which the pair originally founded in 2017 alongside Charles C. Johnson" [0].
Other notable white supremacists with material ties in the article:
Chuck Johnson [1] collaborated with Ton-That and "in contact about scraping social media platforms for the facial recognition business." Ran a white supremacist site (GotNews) and white supremacist crowd funding sites.
Douglass Mackey [2] a white supremacist who consulted for the company.
Tyler Bass [3] an employee and member of multiple white supremacist groups and Unite the Right attendee.
Marko Jukic [4], employee and syndicated author in a publication by white supremacist Richard Spencer.
The article also goes into the much larger ecosystem of AI and facial recognition tech and its ties to white supremacists and the far-right. So there are not just direct ties to Clearview AI itself, but a network of surveillance companies who are ideologically and financially tied to the founders and associates.
[0] https://en.wikipedia.org/wiki/Clearview_AI
[1] https://en.wikipedia.org/wiki/Charles_C._Johnson
[2] https://en.wikipedia.org/wiki/Douglass_Mackey
[3] https://gizmodo.com/creepy-face-recognition-firm-clearview-a...
[4] https://www.motherjones.com/politics/2025/04/clearview-ai-im...
Again, if you want to allege that these other people are white supremacists go right ahead.
But you wrote that "Clearview AI was founded by white supremacists". Even after your new set of links, this remains unsubstantiated. None of your links allege that the Clearview founders are white supremacists, they make an attempt at guilt by association.
You're right, my apologies. I've edited the original from "Clearview AI was founded by white supremacists" to "Clearview AI was founded by people with close ties to white supremacy and even employed some." Thanks for the correction!
Did you mean Vietnamese supremacist?
Nope. I meant white supremacist [1]. Notice I didn't say smart white supremacist.
[1] https://img.huffingtonpost.com/asset/5e8cc7922300005600169bd...
They hired weev? Can you point out to where he founded or worked for the company?
[flagged]
[flagged]
This is exactly, precisely the opposite of what the impact will be.
For example:
- every technology has false positives. False positives here will mean 4th Amendment violations and will add an undue burden on people who share physical characteristics with those in the training data. (This is the updated "fits the description.")
- this technology will predictably be used to enable dragnets in particular areas. Those areas will not necessarily be chosen on any rational basis.
- this is all predictable because we have watched the War on Drugs for 3 generations. We have all seen how it was treated as a tactical, militaristic problem in cities and became a health-concern/addiction problem when enforced in rural areas. There is approximately zero chance this technology becomes the first law enforcement tool that applies laws evenly.
Not only is this incredibly naive, it misses that whole "consent of the governed" thing. I don't want AI involved in policing. They are bad enough and have so little accountability without "computer says so" to fall back on. That's all AI will do: make a bad situation worse.
The targets for the AI are still set by humans, the data the AI was trained on is still created by humans. Involving a computer in the system doesn't magically make it less biased.
That is true for now, but eventually it should be possible for it to be more autonomous without needing humans to set its target.
That's just what we need, an AI that was trained on biased data and then empowered to do whatever it wants autonomously. It's a pity we can't look to any examples of human intelligences that have been trained on biased data and then empowered to do whatever they want autonomously.
Ah yes, we'll call the system Skynet.
Same could be said about the computer systems that have been developed in the last 20 years. But that hasn’t happened…
Are you sure it won't enable targeted enforcement of people law enforcement finds irritating, rather than evenly applied law? It's still people setting the priorities and exercising discretion about charging.
It should be easier to audit since you would have a list of who broke the law, but action had not been taken yet.
do you think the records of the vast number of police departments and agencies would be combinable with the separate court records, as well as the facial recognition access data source (if it exists?)
I think that is pretty unlikely
I wonder how many laws and sentencing guidelines etc are formulated with an implicit assumption that most of the time, people aren't caught.
In my estimation all of the criminal ones and at least half of the civil ones.
I think it will reveal unfair laws and as a society we will have to rebalance things that had such an assumption in place.
We can't even make hand driers that don't discriminate on the basis of race. You think making complex law enforcement decisions based on data is going to be easier?
None of the destruction of your rights has lead to improvement in clearance rates.
Crimes aren't solved, despite having a literal panopticon. This view is just false.
Cops are choosing to not do their job. Giving them free access to all private information hasn't fixed that.
Then cops should be taken out of the core law enforcement agentic loop. There could be a new role of people who the AI dispatches instead to do law enforcement work in the real world.
I think you fundamentally misunderstand what the role of the police is. They protect property, the owning class, and the status quo. Laws are just a tool for them to do that. Equal justice for all is not a goal for them, and AI will not provide more of it.
The thing is, if you have a truly fair AI you start catching the Trumps and Musks of this world in their little underage trysts. How long do you think that system would actually stay running?
The thing you're missing is our system is working exactly like it's supposed to for rich people.
Meanwhile, all AI face recognition software works poorly on non-Caucasians.
With this administration, I think that is a feature not a bug
Yeah its not like the "AI" manufacturers have their own biases that are reflected in the model.
For example, Deepseek won't give you critical information about the communist party and Grok won't criticise Elon Musk
Why do you write so many low-effort, disingenuous, inflammatory comments? They're "not even wrong", yet they just suck energy right out of productive discussion as people inevitably respond to one part of your broken framing, and then they're off to the races arguing about nonsense.
The main problem with the law not being applied evenly is structural - how do you get the people tasked with enforcing the law to enforce the law against their own ingroup? "AI" and the surveillance society will not solve this, rather they are making it ten times worse.
I want to share my opinion even if I know that it may not be a popular one on HN. I am not trying to maximize my reputation by always posting what I believe will get the most upvotes, but instead I prioritize sharing my opinion.
>people inevitably respond to one part of your broken framing, and then they're off to the races arguing about nonsense.
I agree that this is unproductive. When people have two very different viewpoints, it is hard for that gap to be bridged. I don't want to lay out my entire worldview and argue from first principles because it would take too much time and I doubt anyone would read it. Call it low effort if you want, but at least discussions don't turn into an echo chamber of a single belief.
>how do you get the people tasked with enforcing the law to enforce the law against their own ingroup?
Ultimately law enforcement is answerable to the people, so if the people don't want it, it will be hard to change. In regards to avoiding ingroup preference, it would be worth coming up with ways of auditing cases that are not being looked into and having AI try to find patterns in what is causing it. The summaries of these patterns could be made public, allowing voters and other officials to react to such information and apply needed changes to the system.
I think a good first step to policing the police is to have any use of violence by law enforcement be put to trial in court. They would have all of the same constitutional protections as any other defendant and "I was an officer of the law carrying out my duty" would be a reasonable mitigating factor. There would be no need to jail them or require bond or arraignment or any of that, but they would have to show up for the trial and demonstrate why use of force was necessary.
> They're "not even wrong", yet they just suck energy right out of productive discussion
You answered your own question - it's straight up bait.
LE has been getting increasingly advanced technology over the years. The only thing that’s increased is their ability to repress and oppress.
Go lick boots elsewhere.
Skynet. "You only postponed it. Judgment Day is inevitable."