This is interesting. The article doesn't say exactly what's behind it, but I'd suspect the latest wave of North Korean plants doing the rounds?
Also, it's crazy how it's not that easy to find a good job any more, and yet these (quite obvious, maybe in hindsight) fakes are almost getting offers over badly deepfaked video calls. Saying you're from Poland, saying you got your degree in Poland, but not being able to speak Polish?
In a way, it's incredibly hilarious that due to AI, modern companies have to essentially resort to what used to be a common way among 4channers to ensure someone is a real person — ask them to put a shoe on their head.
Companies use AI to filter candidates, candidates use AI to apply to jobs... Don't miss the next episode...
...or, hold a fish.
> Return of in-person final rounds? This looks an obvious consequence of these incidents.
Well, even before AI we had qualified people being paid to go to interviews for scammers. Or qualified candidates outsourcing their jobs after being hired. An unfortunate downside of remote work really — you can’t be sure of who’s doing the work if they don’t have to come in (and I say this as someone who’s just about to start working on a fully remote, international team). AI has made fraud massively cheaper though.
How do you know the person in the office hasn’t outsourced their work and they’re just sitting there looking busy?
Because you can tell?
What is the purpose of this argument other than to justify deceit?
The claim is that remote work is bad because it's hard to tell if someone is really doing work. If I'm on site and fucking around, unless you're actively watching my screen how would you know?
The point is that I reject the idea that this issue is unique to remote work, and thus it shouldn't be used as an argument against remote work.
So much this. It's not news that this is a thing; it's a problem of cost -- if there is no way to remotely ID a person, a remote company has to pay for a flight ticket and is at a disadvantage in hiring costs compared to a competitor who hires locally.
> So much this. It's not news that this is a thing; it's a problem of cost -- if there is no way to remotely ID a person, a remote company has to pay for a flight ticket and is at a disadvantage in hiring costs compared to a competitor who hires locally.
As long as the fully loaded ongoing cost of remote workers is lower than the fully loaded cost of in-person workers, you'll still come out ahead by sending people an airplane ticket, a hotel room, and meals. And long term I fully expect remote hiring/identity-validation-as-a-service companies to emerge.
Productivity-wise, if the job is being done, why would you care who is doing it?
Someone willing to lie to this extent can be assumed to not actually be as productive. Any “productivity” metric they hit can also be assumed to be faked or gamed while on the job.
But, narrowing to just “productivity-wise” is moving the goalposts. I want to work with honest people full stop.
Economically speaking, if the job could be done by AI, why would we pay a person to do it?
Usually (but not always) the complaint with such scams is the work is being “done” at such low quality that it doesn’t match what’s demonstrated in the interview process; if it’s done at all, that is.
Bad actors won't be respecting confidentiality. You're potentially giving adversaries dev access to your systems, and a copy of the source code.
Even if they're just in it to make a quick buck and then disappear, this kind of internal info is another thing to sell on.
I don't know how it works overseas, but it sounds like this is a plus for US companies hiring US employees. According to the law, the company is required to fully ID you before hiring, which means either:
1) Hopping on a video call and placing the passport or (driver's license + birth cert) in front of the camera.
2) Going to an in-person business that the company is friendly with to have their HR verify you.
Both of these usually involve sending a photo of the documents in addition to the in-person/video verification.
#1 is very popular, and in a legal grey area ever since COVID. More standardized/strict businesses (banking, university) will require #2. Either one of those options kills this outright (ignoring the other billion red flags they should have seen pre-interview).
Along with a background check, which even the crappy ones will verify using things like W2s, I don't really see this as a concern, unless you've got sketchy hiring practices and hire in countries you aren't familiar with.
> According to the laws the company is required to fully ID you before hiring
Is that new? I have never shown an ID to an employer, including when I worked in the US.
You’re probably just forgetting.
Been a thing since 1986 https://www.wikipedia.org/wiki/Form_I-9
Maybe. I think I filled out such a form with the passport number etc. It is just that I don't think I showed anyone my passport.
If you can't manage to find something as generic as a backend engineer or web developer in your own network, you have no business running a tech company. I feel sorry for whatever investors hand someone money whose first thought is "Let's hire some foreign developers" -- probably paying them on the cheap. The original sin (and scam) is these founders taking investment capital while being unqualified to run a company.
Could a solution be the same one used for proctored exams to obtain some certifications?
You have to come to a certified center near your home, where they'll check your credentials and put you in front of a certified computer with cameras recording everything.
Since they're local, they know the customs, they know which credentials to check, and they can't place an AI in front of a computer.
It wasn't even about certifications, just identity.
Global remote identity verification is not a solved problem, unless you're a government identifying your own taxpayers, and even that is not universal.
A big (but not only) point of going into a testing center is to make sure that the person taking the test is the person they say they are.
That assumes the testing center doesn't take bribes.
I remember someone at a HK company sent something like $20M to scammers because he was instructed to by what he thought was a call with senior executives, but which was actually AI deepfakes run by the scammers. That was last year, I think?
>the candidate did not speak any Polish, but was based there and graduated from Warsaw University of Technology. He spoke in broken English, and with a very strong accent that sounded Asian, but these weren't big red flags, and the candidate sounded motivated.
This is ridiculous. According to their resume they lived, studied, and worked in Poland for over 10 years and don't speak Polish? This isn't considered a red flag? This was a screening call and the head of operations categorically failed.
>Communication was a bit choppy, and the candidate did not speak Serbian, despite graduating from the University of Kragujevac, in Serbia. Once again, they sounded motivated, so proceeded to the next round.
So no actual lessons learned from the last time.
>This is ridiculous. According to their resume they lived, studied, and worked in Poland for over 10 years and don't speak Polish?
Could be Russian. :D We have people who've been living here for 60+ years and cannot speak the local language.
Not even that, the names used by the impersonators are quite suggestive of their being natives of those countries.
They are an IT Security company, what did you expect?
Absence of "soft" skills and a weird logical process, of course.
AI slop security company; the article is just an advertisement.
Well, it is technically possible to spend 10 years in a bubble, either an academic or a professional one, and not learn the local language. If your colleagues all speak English, you have a non-native partner, and you have no motivation to learn, it won't come on its own. Not a great approach, but perfectly realistic.
Don't judge the ease of learning a language based on English - it's one of the easiest out there to learn (I speak 5 languages and English is absolutely trivial compared to the others), especially for folks with a similar base (i.e. Romance/Germanic ones). Slavic languages are much harder to learn and basically useless outside the given country (with a few exceptions, but those are usable mostly at C2/native level).
Also, compared to many other languages, even a very basic level of English gets you somewhere. In some languages, like French, natives basically refuse to speak to you or simply don't understand you unless you have roughly B2 level (this is the personal experience of me and the folks around me where I live).
You've never been to France, right? Maybe this was true 20 years ago, but it is no longer the case. Where I work, in France, there are people who've been here for 10 years who still can't speak the language, which I find pretty depressing. Especially since learning French when you already speak English is pretty easy. I've met enough people who managed to become fluent in French in six months to know that it is really not that hard when you put a bit of effort into it. What it usually shows is a lack of curiosity about your environment, which is a big red flag about the person's social skills.
> in some languages like French natives basically refuse to speak to you
I've never seen this happen, but I do notice a lot of tourists asking for services or starting conversations without saying 'hello' or 'bonjour', which is considered disrespectful in France.
>Slavic languages are much harder to learn and basically useless outside given country (with few exceptions but usable mostly for C2 or natives level).
Slavic languages are much harder for you. For speakers of other Slavic languages it's about a year of passive exposure to be somewhat conversational.
Does this article not sound overly paranoid or perhaps a little made up to anyone else?
Think twice before following all of these suggestions.
> Ask the candidate to do things that AI filters cannot handle – yet, such as:
> * A face turning side to side
With all AI ethical practices being set by unscrupulous people right now, and a gold rush of everyone else ... is an "employer" asking me to do this actually just trying to get a model to impersonate me?
Or is it an actual company, but they want training data for their AI snake oil?
Or is it a legitimate company who are being honest, but the video service they're using is going to abuse that video data (whether or not the ToU permits that, because legal isn't stopping anyone in this gold rush)?
> * standing up and walking backward from the camera, showing more body detail
> In the circumstances, it should be justifiable to run through some or all of these tests, as well as asking for all filters to be turned off.
If we stop and ask ourselves, "What would an HR expert say?"...
Well, the poor professional decorum, and the suspicious tone and imbalanced dynamic it sets, the HR person might not complain about, but...
What I think they might complain about first is us setting up the company for discrimination lawsuits and public outrage:
* What if your request comes off as sexually objectifying and/or discriminating on the basis of their physique? (Basically, "Now how about you walk back and give us a little twirl, cupcake?" or "Bro, do you even lift?")
* What if the person has mobility problems that aren't relevant to the software job?
* What if the person is trans and using filters to help them pass, due to well-known discrimination and abuse?
* What if the person routinely uses filters for a scar or a skin condition they're self-conscious about, and maybe has trauma around that? Or because they have a really unflattering camera on their personal laptop? Are they going to feel humiliated or harassed?
I'm wondering whether some of the suggested countermeasures might be considered OK because the scenarios they'd just talked about involve candidates in countries that can be paid a lot less for the same work. And therefore maybe those candidates are thought of as other or lesser, and not due the same respect we'd hopefully extend to people in our own country or another affluent country?
Yes, if you add problematic facts to a situation, the situation becomes discriminatory.
Nothing in the article suggests calling a candidate cupcake, etc. The subjects of the article are super understanding - allowing people to continue in an interview process despite seeming to not know the native language.
What is the motivation for making up wacky hypotheticals to discredit the relatively trivial suggestions in the newsletter?
I'm suggesting that's what would come to HR's mind, as a way that could come across, and risk lawsuits and/or social media outrage.
(Obviously someone isn't going to literally say "cupcake" in 2025. That doesn't mean that's not how it will come across. And it also doesn't mean that your employees won't mess up, and do your HR-vetted reasonable process in a totally wrong way.)
I'd say it's fair enough given the hint of racism/xenophobia in the article itself
In most Western countries, an act that is discriminatory at face value isn't a crime if there is a good reason for it. Here in the UK, for example, it's called "a proportionate means of achieving a legitimate aim". Avoiding demonstrable fraud would be a legitimate aim and I can't see any court or tribunal disagreeing with it.
This is why I'm not particularly worried about working remote/WFH from the US. I've heard the scare tactic that "your manager will move your job overseas for cheaper!" - but inevitably most companies have no idea how to do that successfully without running into issues like the article. It's just easier for them to deal with and verify US (or Canadian) employees.
I guess at some point AI deepfakes will get so good that they look, sound, and act like a local dev while zooming and working with them throughout the day - but then I suppose, what's the problem exactly? Ha.
A company doesn't have to do it successfully, or even stay afloat, to impact you. Your employer doesn't have to do anything at all for you to be impacted, as long as their competitors do -- your wages will still be suppressed.
> - but then I suppose what's the problem exactly?, ha.
The problem is, companies want to, or are required to, discriminate based on origin and residency status, but AI nerfs their ability to do so (cheaply, at least).
> A company doesn't have to do it successfully, or even stay afloat, to impact you
True, but from my experience the companies that are dazzled by cheap overseas prices tend to be awful places to work to begin with, and I'm kind of happy for them to stumble around trying to make it work out.
What's the point of AI faking? Steal IP? Gain access to internal systems?
Collecting a single paycheque makes it worth it, especially if you're doing it with a bunch of companies all at the same time.
The point for most is to get the paychecks. Either you think you can skate by in the real job but can't get past the interviews or you're just straight up scamming and will keep as many paychecks coming in for as long as possible. I would guess mostly the former.
Looking at this specific company -- maybe stealing the information about security bugs they've discovered and getting a free zero-day. Sort of a security parasitism that doesn't mine its own zero-days the hard way.
Other, less fancy reasons -- just getting a few months of wages and running away.