It doesn't look like you're disputing the specific allegations about doxxing made by ITWire - only mentioning other, unrelated, vague things like "tabloid pieces". There's a Twitter screenshot showing the doxxing - are you claiming that that's fabricated?
Same happened with Crosstalk in a similar fashion. Commented on a video about a product similar to what he was describing and he removed the comment. It’s all about them and the products they want to sell. Not as open/transparent as one would think.
Krebs doxxed one of my friends who happens to be a gay hacker in a country where both being gay and being a hacker are illegal. Krebs doesn't take kindly to any sort of bad reviews or even the hint that he's not perfect, FYI.
Funny, notdan also advocated for an infamous blind neo-Nazi hacker recently released from prison to be platformed and given an opportunity to "tell his story", which coincidentally involved swatting myself and my family (among months of other forms of harassment). When I mentioned that fact, notdan replied that it would make for an interesting podcast and that we would benefit from "his perspective".
I don't know anything about the case or the current attitude of the blind neo-Nazi. Depending on the latter, it could be very beneficial to see how someone could get sucked into such a mental state. Hearing how such a thing happens might be beneficial for those who are just starting that slide into extremism.
Even if the neo-nazi is still dyed in the wool, I certainly don't think that being curious about that person's perspective justifies being doxed.
That's pretty shitty of him. I'm not saying anyone is a saint, only that Krebs doxxes people who disagree with him, which can put their lives in direct harm. I'm not any more of a fan of weev than you.
They definitely should have acknowledged that they covered this before. But to be fair, quoting the article, the reputation damage was already done:
“Following the publication of these articles, between Tuesday, March 30, 2021 and Wednesday March 31”.
And the Krebs article was on April 4th. It seems BleepingComputer broke the story and Krebs just re-reported the news.
Whoops, I stand corrected. Wish I could edit my old comment. Looks like Krebs definitely talked to the guy now indicted in their March 30th article. That makes the complete lack of acknowledgement quite a bit more damning.
I give Krebs a little credit here because the "whistleblower" from his original article was really an insider that was part of the investigation into the breach. Obviously this source was also the hacker, but knowing that was impossible.
Now, I believe Krebs should at least acknowledge he made this mistake; sadly, he hasn't yet.
For a company of their size and, what I would expect, maturity, this new announcement does not satisfy me or provide much assurance. Consequently I am still happy I have been recommending that people avoid Ubiquiti since the original announcement from Krebs.
* Why was it so easy for a lead engineer to get access to a root AWS user without anyone else being notified? I.e. AWS GuardDuty provides FREE alerting for when an AWS root IAM account is logged in or used, this account should be under lock and key and when used, confirmed and audited by relevant persons or teams.
Start edit/
* Furthermore on the root account being easily accessed, the root account in the companies I've worked at had MFA enabled, and the QR code is locked in a safe only accessible by two people agreeing it needs to be accessed in a break glass situation, where warranted.
/End edit
* Why was he also able to delete critical CloudTrail logs and reduce their retention to 1 day? I.e. These logs should be in a S3 bucket or other environment where such changes cannot be made. Alternatively, they should be shipped to a redundant service that manages this risk to prevent data deletion
* Why did Ubiquiti not announce they were compromised sooner? The hack started in early December, Ubiquiti noticed the compromise on Dec. 28, and Ubiquiti told the market on January 11th. Is that a satisfactory turnaround? Giving them some credit for the Xmas break, I'll say this is partially understandable.
All the AWS configuration I'm speaking of above, I would describe as Security 101.
Most of these settings can be set and managed from AWS Organizations for free, and backed up trivially with alarms and alerts from GuardDuty. That a company of Ubiquiti's size and maturity had such basic risks unmanaged is still a concern.
I understand AWS Organizations can be difficult to set up for legacy AWS accounts, but even with that said, setting up the alarms and monitoring that would help manage the risks associated with the questions above is not difficult and should have been in place.
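To make the root-login alerting concrete, here is a minimal sketch (boto3) of one way to do it. This particular version matches a CloudTrail console sign-in event with an EventBridge rule rather than using GuardDuty itself; the SNS topic ARN is a placeholder, and the topic also needs a policy allowing EventBridge to publish to it:

    # Sketch: notify an SNS topic whenever the AWS root user signs in to the
    # console. Uses a CloudTrail sign-in event matched by EventBridge (not
    # GuardDuty itself). The topic ARN below is a placeholder.
    import json
    import boto3

    ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"  # placeholder

    events = boto3.client("events")

    # EventBridge pattern matching console sign-ins by the root user
    root_signin_pattern = {
        "detail-type": ["AWS Console Sign In via CloudTrail"],
        "detail": {"userIdentity": {"type": ["Root"]}},
    }

    events.put_rule(
        Name="alert-on-root-console-signin",
        EventPattern=json.dumps(root_signin_pattern),
        State="ENABLED",
        Description="Notify the security team whenever the root user logs in",
    )

    events.put_targets(
        Rule="alert-on-root-console-signin",
        Targets=[{"Id": "notify-security-team", "Arn": ALERT_TOPIC_ARN}],
    )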
That Ubiquiti would ultimately find relief only through the developer's poor OpSec, rather than through Ubiquiti's own security policies and procedures, provides a damning perspective on their internal security posture.
You assume anyone except the hacker knew anything about AWS.
Our firm subcontracts to other firms. I am, among other things, the resident sysadmin, which means I get to do everything related to AWS.
Out of the 6 contracts we had where the firm had their project on AWS, I got root credentials in 4 cases. Of those 4, I only managed to convince 2 to take their account security more seriously and lock things down.
What I am trying to say is that what happened at Ubiquiti is far more common than people realize.
Because when all is said and done, you still need someone who thinks like a sysadmin.
Frankly, after reading the other comments in this chain, as well as my experience in other businesses and industries, you're likely correct. I'm giving too many companies far too much credit.
> You assume anyone except the hacker knew anything about AWS.
There were a lot of smart and experienced people at Ubiquiti when I worked there.
Nick Sharp manipulated his way into total control over everything and wouldn't let anyone outside of his isolated team touch it. Nick was hired out of his job at Amazon because he was supposed to be the AWS expert. He used that to lock out anyone but himself and a trusted team.
It didn't matter that others knew better. Nick controlled it.
> Nick was hired out of his job at Amazon because he was supposed to be the AWS expert.
This always cracked me up. From what I can tell, he was a mid level dev on the Alexa web api team. He knew AWS sure, but he did not have the cred at all to justify the position and responsibility he was given at Ubiquiti.
While I agree this is demonstrative of overall less-than-adequate security practices, I'm unsurprised that a company that started with making hardware, and only later added cloud functionality beyond a website/store, was not initially set up as you described. I've been involved in the transition from the sort of setup Ubiquiti had to the one you just described, and it can be quite a lot of work. If the industry involved didn't have regulatory risk-management drivers, it's entirely possible that it would have been considered more expensive than it was worth.
Ubiquiti now has public evidence their security posture is inadequate and there will be pressure for them to demonstrate they have changed this situation.
It’s very difficult to completely prevent malicious actors engaging in deliberate efforts to infiltrate and plan actions like this once they have any measure of access and trust. What matters now is how they respond to it.
It is next to impossible to prevent malicious actions from insiders. But it is a well understood risk and there are plenty of canaries and deterrents possible that it should stop all but the most dedicated adversaries. Those same measures will notify multiple other people when it happens so you can act accordingly.
> * Why was it so easy for a lead engineer to get access to a root AWS user without anyone else being notified? I.e. AWS GuardDuty provides FREE alerting for when an AWS root IAM account is logged in or used, this account should be under lock and key and when used, confirmed and audited by relevant persons or teams.
I had never heard of GuardDuty so I got curious.
It doesn't seem like there's a free tier available for it: https://aws.amazon.com/guardduty/pricing/
In fact at larger scales it seems pretty expensive even.
You're absolutely right, I had not noticed. I'd say this is because GuardDuty, while not free, has consistently been such a small line item on my AWS bills that I assumed it was.
Thank you for the correction.
All the same, the point I was trying to drive home is that when a service is this effective, both in practice and in cost, it should in nearly all instances be implemented.
Edit: I wanted to amend my statement but the window to make those changes has passed.
I agree it reflects poorly on the organization, but the previous story on HN [1] mentioned that the malicious insider had scared the CEO into giving access by manufacturing some sort of security incident. Had this not happened, he would not have been able to pull it off IIRC.
That would also point to the CEO being too powerful when it comes to security, which is also a knock against the organization, but in a different way.
Hoo boy, this is gonna be a fun one. For reference, I spent a year (mid-2018 to mid-2019) running the UniFi Network team and worked with Nick during that time.
> * Why was it so easy for a lead engineer to get access to a root AWS user without anyone else being notified? I.e. AWS GuardDuty provides FREE alerting for when an AWS root IAM account is logged in or used, this account should be under lock and key and when used, confirmed and audited by relevant persons or teams.
The "Cloud Lead" that Nick took over from gave zero fucks. He ran all the AWS stuff for Ubiquiti under his personal AWS account. Nick came in and started putting "proper" AWS structure and security in place, primarily by scaring Robert (the CEO) into giving him the keys to the castle (my own personal opinion of Robert is... not the greatest).
One thing to understand about Ubiquiti (at least during those times) is that the company had zero C-level execs. There was Robert.... and then nobody knows. I asked repeatedly why we didn't have a CTO, or a COO, or a CFO, or CMO or ANYTHING and I got nothing but shrugs and "idunno" as a response for the whole year I was there.
So when Nick came in, a very... let's just say "forceful" personality, he immediately won over Robert and ended up with carte blanche over pretty much all of Ubiquiti's cloud accounts. Which were basically... everything. All the UniFi Network services, UniFi Protect services, you name it. If it was connected to the cloud in any way, Nick had access to it.
So why wasn't anybody else notified? Simple. Because he was basically "god". If anybody was gonna be notified, it would've been Nick. He was the top of the totem pole company-wide when it came to AWS.
Also, for some perspective, at that time Ubiquiti kept all the hardware signing keys in a private GitHub repo that every employee had read access to. And they were in plain-text. So... yeah.
> * Furthermore on the root account being easily accessed, the root account in the companies I've worked at had MFA enabled, and the QR code is locked in a safe only accessible by two people agreeing it needs to be accessed in a break glass situation, where warranted.
See above for the quality of security processes and practices this company had in place.
> * Why was he also able to delete critical CloudTrail logs and reduce their retention to 1 day? I.e. These logs should be in a S3 bucket or other environment where such changes cannot be made. Alternatively, they should be shipped to a redundant service that manages this risk to prevent data deletion
See above. (re: "god") Nick answered only to Robert. And he'd already successfully hoodwinked him. He could do whatever he wanted. Eventually he fell from Robert's good graces, but seeing as Ubiquiti as a company didn't really have a ton of checks and balances, he kept his god-level access far longer than he should've.
> * Why did Ubiquiti not announce they were compromised sooner? The hack started in early December, Ubiquiti noticed the compromise on Dec. 28, and Ubiquiti told the market on January 11th. Is that a satisfactory turnaround? Giving them some credit for the Xmas break, I'll say this is partially understandable.
Simple. Fear of share price falling. I was constantly given this as a reason we couldn't be transparent. Not by Robert, nor where he could hear. But it was pretty much well known that the company kept shit quiet for fear of the share price dipping.
> All the AWS configuration I'm speaking of above, I would describe as Security 101.
To keep with the metaphor, Ubiquiti couldn't even get Pre-school level security in place, much less 101. I have no idea how something even more massive hasn't happened yet. Must be dumb luck.
Speaking of, by the time I left the company, the team that was handling the door entry-way systems (UniFi "Access" I guess) had been caught with numerous security issues, not the least of which was logging user credentials in plain text (not just storing, but logging, in response to authentication events). They were also based in China and subject to Chinese laws around government access, so take that how you will.
And that doesn't really even cover most of it. That year took a toll on my physical, mental, and emotional health, not to mention put a crazy strain on my marriage. I'd rather honestly forget it, but the schadenfreude of what's going on is too delicious to ignore.
>Also, for some perspective, at that time Ubiquiti kept all the hardware signing keys in a private GitHub repo that every employee had read access to. And they were in plain-text. So... yeah.
This is frankly worse than any of this other news. So there's essentially zero trust associated with the code signatures since any employee, past or present, can sign a payload. Wonderful.
I've since heard that the repo has been taken down and all the keys rotated, but just kinda makes you wonder how many APs and switches and cloud keys, etc are still out there using compromised keys.
Also, even though they may have had read access, not many knew it existed. But it wasn't super hard to find (I stumbled across it basically).
Oh, and then there was the whole metrics collection debacle, where the controller basically phoned home about the topology of every network that it managed. Even if you opted out. Opting out just meant they fuzzed your ID so any given record couldn't be linked back to PII. Which may or may not be legal, IANAL.
But either way it definitely wasn't clear that opting out meant data was still collected. Super sketchy.
> Also, even though they may have had read access, not many knew it existed. But it wasn't super hard to find (I stumbled across it basically).
We didn't have read access until Nick Sharp and his team took over GitHub permissions and gave everyone access. Wonderful security work.
> Oh, and then there was the whole metrics collection debacle, where the controller basically phoned home about the topology of every network that it managed. Even if you opted out. Opting out just meant they fuzzed your ID so any given record couldn't be linked back to PII. Which may or may not be legal, IANAL.
Nick Sharp was at the core of this too! He built the 'trace' system to collect all of these metrics and had all of these ideas about how to secretly collect the data in ways that would be hard for people to detect.
He pretended to be a principled person who stood for security and privacy, but whenever he saw an opportunity for political gain he abandoned all principles. He was the only person I knew at the company who was enthusiastic about collecting all of that data.
Oh god don't remind me about Trace. I had to deal with the Controller side of that and it was a damn nightmare.
He basically dictated that you couldn't use any kind of repo+deployment pipeline except for what his team was building. Which wasn't actually functional for like 8 months. So we never even got a dev or staging tier to test against for months.
And then when I ended up with access to push things along, the actual apps for the trace system were... not well implemented.
Ugh... I could bitch about this stuff for literal days but I gotta drop my kids off.
Oh hai people who used to work at UBNT. From reading your responses here (and elsewhere) it definitely seems our paths did not cross, but the shared sense of Schadenfreude is good and strong :)
The usual answer I've found to this question is Mikrotik; they strike a similar balance between enterprise feature set and a more consumer-level price point, much as Ubiquiti aims for with the Unifi line.
The quality/feature set is there and the software is well designed, even if not quite as networking beginner-friendly as Unifi has become. Mikrotik's RouterOS can do much the same tasks as Unifi's management console, and can configure for auto-adoption of APs/other hardware in the Mikrotik range just like Unifi does for their own hardware.
Most competitors (I see Aruba suggested) are priced much more into the enterprise/business buyer realm. Unifi has generally been keenly priced in this market; their latest Wifi 6 APs are just 99 dollars each (when in stock, of course...). Mikrotik's pricing is generally comparable to or cheaper than Unifi's in my experience.
Secondhand Ruckus APs are a pretty decent alternative, you'll have a hard time getting AX gear for a reasonable price though.
edit: Secondhand Ruckus/Brocade switches are solid, at least on the 7000 series the evaluation key has no time limit so you're not license-limited in what you can do with them. Switches are mostly <$250 on eBay if you're buying an ICX7150, ICX7250, etc. Yes, that includes PoE models.
Is it just software that's UniFi's weakness? Anything wrong with the hardware itself? I've had quite good luck with UniFi in my home myself but perhaps I'm not using all the features...
> For reference, I spent a year (mid-2018 to mid-2019) running the UniFi Network team and worked with Nick during that time.
Nick's whole strategy was to find a problem, exaggerate it as much as he could get away with, and then offer himself as the hero who would fix it all.
He exaggerated or lied about everything he wanted to use for political advantage, right up to the end where he fabricated a hack and used Krebs to exaggerate it as much as possible for his own personal profit.
You have to realize he did the same thing during his time at Ubiquiti: Found problems he could use for political advantage, exaggerated them as much as he could get away with, and then amplified his lies until they were gospel. A lot of what you're saying has some roots in truth, but I can tell you have the exaggerated Nick Sharp version of events.
> There was Robert.... and then nobody knows. I asked repeatedly why we didn't have a CTO, or a COO, or a CFO, or CMO or ANYTHING and I got nothing but shrugs and "idunno" as a response for the whole year I was there.
This wasn't some big mystery. Everyone knew that Robert ran everything as CEO and the legal, marketing, and other teams operated out of the New York office.
> Nick came in and started putting "proper" AWS structure and security in place, primarily by scaring Robert (the CEO) into giving him the keys to the castle
Nick was hired specifically to run AWS. That was his job from the beginning. The old cloud team quit and Nick was recruited from his job at Amazon because supposedly he was an AWS expert.
The incident where he scared the CEO was the first of his political games to exaggerate or fabricate security incidents for political gain.
> So why wasn't anybody else notified? Simple. Because he was basically "god". If anybody was gonna be notified, it would've been Nick. He was the top of the totem pole company-wide when it came to AWS.
Yes, this. All of these news stories are missing the point that Nick was the cloud lead. You don't have to believe anonymous commenters. His LinkedIn profile will confirm it. He was recruited out of Amazon to lead the cloud efforts, but he was in over his head and had severe personal issues.
> at that time Ubiquiti kept all the hardware signing keys in a private GitHub repo that every employee had read access to.
This is another Nick exaggeration. It's true that older devices had hardware signing keys stored in a Git repo before the system was updated and keys rotated. However, those old keys were only accessible by a few people until Nick and his team took over GitHub and restructured permissions with the web portal they built themselves. In the process they made too many repos accessible to too many people.
> To keep with the metaphor, Ubiquiti couldn't even get Pre-school level security in place, much less 101. I have no idea how something even more massive hasn't happened yet. Must be dumb luck.
Ubiquiti's overall structure is far from perfect, but you were only there during the Nick Sharp era. Ubiquiti had a lot of people who took security and proper practices very seriously before Nick Sharp took over everything, but it was also a distributed company with a lot of isolated divisions. Nick Sharp got into power by taking the worst and oldest parts of the company and convincing people that everything was equally bad and that only he could fix it. If you got your security information from Nick Sharp, you'd think that Nick is the only person who can do anything properly at the company.
> Speaking of, by the time I left the company, the team that was handling the door entry-way systems (UniFi "Access" I guess) had been caught with numerous security issues, not the least of which was logging user credentials in plain text (not just storing, but logging, in response to authentication events). They were also based in China and subject to Chinese laws around government access, so take that how you will.
I also heard that, but I think it was just incompetence on their part. Nick was pushing the conspiracy that they were doing something with the Chinese government, but it doesn't follow that they'd do it by sending the data to AWS servers under his control. I think they just made a sloppy prototype to impress the CEO and got caught doing dumb stuff. I do blame the company for not cutting that team off, though. They had no idea what they were doing other than their ability to put together quick prototypes to impress the CEO.
If you're telling me I worked there at literally the worst possible time frame, I'd believe it. I may have my experience skewed through the perspective of Nick's influence, but tbh many of my issues were unrelated to him or his sphere of influence.
The C level thing may not have been a "big" mystery, but it was to me, and as somebody who was running the dev of a flagship software product (UniFi) it set off alarm bells that nobody I talked to could explain who was handling the roles of those execs. I'm not exaggerating when I say I effectively got "I dunno" as a response when I inquired, and I dug.
It is good to know, though, that what I experienced wasn't chronic for the entire company's existence.
To clarify on the China thing, I wasn't trying to imply that anything nefarious was actually happening. Just that it warranted some scrutiny when a security focused product was being developed on the Chinese mainland and by a team of Chinese citizens that are subject to CCP laws. Given some of the things that have happened around that country's involvement in tech in recent years, I don't think such scrutiny is unwarranted, especially when the team has a track record of security "goofs".
> Ubiquiti kept all the hardware signing keys in a private GitHub repo that every employee had read access to
This right here is why I'll never use Ubiquiti gear. These devices are so obviously backdoored and like swiss cheese, they offer the complete opposite of security. Thanks for sharing the true facts.
Those are good points, but Ubiquiti lost me at centralized, private management of my network devices. Someone might, you know, abuse that; plus, I'm not in control of what's on my device.
The problem is that they set a bar there. There is a gulf between consumer and enterprise APs that is full of only Unifi-esque gear. “Cloud management” is an OOB requirement and primary use case for SOHO gear these days. It’s a fleet management issue, just wish it could be done with proper config management and not phoning home. But they’re selling to small shops doing IT for 10s of law firms or doctor’s offices.
I’d never use Ubiquiti’s switches, routers, etc. There are great alternatives there. But when I went to replace my APs earlier this year I still could not find anything less shady that still did what I needed.
Is there an alternative that does PoE w/ multiple APs that hand-off well? And decent hardware…
TP-Link has a similar offering but with similar problems.
OpenWRT on [consumer gear] is not an answer here. More effort goes into the plastic than the hardware. Never dealing with cheap NICs, bad SoCs, inadequate memory, garbage drivers, etc. again.
You don't have to use their cloud services, everything can be managed 100% locally.
With that said, I'm moving away from Ubiquiti after years of broken, pointless updates, years-long outstanding bugs, and, after this thread, clearly lacking security.
They’ve pretty much broken that in practice since the last major firmware update.
I just only turn my controller on when I’m working now and occasionally have to re-adopt.
And alternatives likely have similar issues. It’s what naturally occurs when small businesses get large, and especially when companies go from embedded to SaaS development.
It's mostly meant for enterprise use cases. Think something like hotel WiFi. Which is easier to secure? The cloud accounts, or physical access to the several hundred devices in the hotel? Even in private buildings, if you're deploying thousands of access points, you may not trust the minimum wage third party contractors doing all the installations and don't want on-device configuration to be possible.
I've found that every one of these security incidents always has someone come in and say:
"Why didn't they just ..."
Where the problem is that the "..." is the subset of non-default security configurations that would have stopped this specific insider(!) attack. They never mentioned that:
- You can't predict which attack you'll be hit with, so you have to implement every non-default / optional security setting or feature in order to be "protected". This is a metric ass-ton of work, typically on the order of multiple man years of up front effort, and then with ongoing multiple FTEs required (e.g.: for separation of roles).
- While you're not actually under insider attack, this provides zero business benefit that can be measured in dollars.
- The people implementing the protection against insider attack are the same people that the protection would typically be designed to stop. No one person can be trusted to do this. Not even any one team. Just organising the Byzantine security required for this is a project manager's nightmare.
- Insider attacks by senior technical staff are stupidly difficult to protect against. I've never seen any org that could survive an intelligent, motivated attacker that already has significant admin rights. Even the FAANGS would take a significant "hit" in this scenario. Ubiquiti is nowhere near their scale or capabilities in this space.
- All that stuff described like "QR codes in a safe" are hilarious to me. Every time I've tried to implement even 1/10th of that, everyone who ought to be responsible for the secure handling of this kind of key material ran away screaming from any personal responsibility. Literally nobody ever wanted to deal with anything even vaguely similar.
- Many complex delegation systems and IAM technologies suffer from permission-role inversion. Often the CEO/CIO/CTO has virtually no access to the technology stack, but the junior intern from the subcontractor has the keys to the castle and could delete the whole org for shits and giggles. But, you see, he's not "allowed to". But he has the physical permissions. That he's not supposed to use. Because he was told not to. You see?
- Now you're going to say that the senior staff should have the admin rights, and delegate the appropriate rights to the junior staff. Oh, you sweet summer child. That requires responsibility! (see above). It also requires that they learn, manage, and monitor something as technically complex as AWS IAM. But you see, senior people are busy people. They're busy with meetings. With memos. And more importantly, they're busy playing politics and jostling for the next promotion. The fiddly security rule stuff they just delegate to the juniors. That's the ticket to the next pay grade!
I've written this before, but someone on HN posted a link on Ubiquiti Glassdoor reviews where multiple people mentioned that the company is hostile to automated testing. Have things changed? If not, why, just why?
> Why was it so easy for a lead engineer to get access to a root AWS user without anyone else being notified?
Oh my sweet summer child. You haven't worked in large organizations with thousands of employees before, have you? Surely not if you think this is "Security 101".
> Investigators say they were able to tie the downloads to Sharp and his work-issued laptop because his Internet connection briefly failed on several occasions while he was downloading the Ubiquiti data. Those outages were enough to prevent Sharp’s Surfshark VPN connection from functioning properly — thus exposing his Internet address as the source of the downloads.
Not the first time I’ve read about a VPN unable to mask someone’s ip when they were on a wonky connection.
We have to admit the guy is consistent, at the least. He drove his employer's security to ruin on purpose, and did the same to himself.
I can't believe someone can literally destroy their life for BTC. Imagine his family and close friends. His parents probably thought he was a tech wizard genius. And now he has destroyed his reputation and his employer's, and he'll be behind bars for quite a few years. I hope he doesn't have kids.
And picture this: he could have been the guy who did a great job "fixing" his employer's lack of security, had that on his resume, and shared lessons learned from real-world practice.
Why the hell would he even think about the FBI route in the first place?
> And picture this: he could have been the guy who did a great job "fixing" his employer's lack of security
This is not really a thing. It's very hard to get recognition or even a shared understanding of the risk mitigation. As with everything else in the world, it's way easier to reward someone for things that happen (new functionality) than for preventing things from happening (a hack).
I completely disagree. You may not get recognition from the people at the front desk, but you absolutely can get recognition from your peers inside a company and (as valuable?) from people outside the company. Some of this has to be the result of your own work and marketing: conferences, getting clearance to talk about the work you've done (which is good marketing for the company if sold correctly), speaking engagements, etc. If you think the recognition just comes as automatically as wages, then you're doing it wrong, but you'd be correct that not much will be forthcoming.
Classic example of the Do Stuff / Tell People You Did Stuff balance. This stuff is arcane and invisible by default. The coordination and communication tasks that surround the direct work are important.
One way to strike a good balance here is a security roadmap. Write down the adverse outcomes that would be a problem for your business. Write down the possible mitigations, how much they cost to implement, and how strong they are. Propose a plan that continuously improves security in a cost-effective way. Highlight significant things you can't defend against yet, and explain how you could address them sooner with more funding. Show the plan to leadership, get it approved, and get to work. Every month / quarter / sprint, write down what you did, show where you are on the roadmap, and adjust the roadmap to reflect any changes in business priorities.
Is there a good reason for that to not be the default when using a VPN? At the very least it should be easy to configure. I remember when I tried using a VPN on Ubuntu a bunch of years ago, I had to set up iptables rules even though the VPN connection could be configured through the network manager GUI.
But WireGuard itself relies on an active internet connection, which still needs to be set up so that traffic doesn't fall back to it if the wg connection deactivates.
In brief, you move your physical eth/wlan device to a new namespace, and create the wg device in that namespace but then move it to the init ns.
By default (and without root) everything will use the init ns and only be able to reach the physical device via wg. If it's not active, nothing will even reach your NIC, nevermind the internet.
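For anyone who wants to try it, a rough sketch of that recipe (run as root; eth0/wg0, the addresses, and the config path are placeholders for your own setup), roughly following the steps documented at wireguard.com/netns:

    # Sketch of the namespace trick described above: the physical NIC lives in
    # its own namespace, wg0 is created there (so its encrypted UDP leaves via
    # eth0) and then handed back to the init namespace, which is left with wg0
    # as its only way out. Interface names and addresses are illustrative.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # 1. Park the physical NIC in a dedicated namespace.
    sh("ip netns add physical")
    sh("ip link set eth0 netns physical")
    sh("ip -n physical link set eth0 up")
    sh("ip -n physical addr add 192.168.1.50/24 dev eth0")   # or run a DHCP client inside the ns
    sh("ip -n physical route add default via 192.168.1.1")

    # 2. Create wg0 inside that namespace, then move it to the init namespace.
    sh("ip -n physical link add wg0 type wireguard")
    sh("ip netns exec physical wg setconf wg0 /etc/wireguard/wg0.conf")
    sh("ip -n physical link set wg0 netns 1")

    # 3. In the init namespace, wg0 is now the only non-loopback interface.
    sh("ip addr add 10.0.0.2/32 dev wg0")
    sh("ip link set wg0 up")
    sh("ip route add default dev wg0")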
+1 to this being the safest way. No physical nic for your traffic to fallback to. The downside is this isn’t supported by any GUIs that I know of so it’s a pretty miserable workflow for WiFi.
Let's say you have a Wireguard configuration, wg0 which runs off ens0. If your wg0 connection dies, for whatever reason (let's say the remote server goes down), your computer falls back to ens0.
What does "not having a connection to be maintained" change about this?
Wireguard will keep contacting the remote server. You'd have to delete the wg0 interface or delete the default route for packets to go out via ens0. Wireguard only has "connection" in a sense that it caches some runtime information about the peer's endpoint, but endpoint configuration is static.
I guess, too much magic automation on top of this is not the best thing for opsec, including having some daemon that can disable your wireguard interface or reconfigure the network if it doesn't like something. You want your network configuration to be static and predictable, regardless of some temporary failures. Basic wireguard kernel primitives will give you that.
> You'd have to delete the wg0 interface or delete the default route for packets to go out via ens0
So... You'd have to do work to properly blackhole traffic when wg0 goes down. However long it takes to reconnect, you still will automatically fall back down to ens0 while it's down unless you do something to stop that.
Packets are always delivered to wg interface as long as it is marked as 'UP' (or 'enabled', if 'UP' sounds to you like having anything to do with some kind of "connection") when your routing table directs them there.
When they hit the wg interface when the other endpoint is unreachable for whatever reason, they are either dropped, or queued, or get ICMP unreachable response generated for them, depending on situation. This is done internally by wireguard.
You have a route to stop that. You route 0.0.0.0/0 through wg0 and it doesn't matter if no traffic returns, it'll keep trying because that's the route that's been defined for wg0.
If you wanted to not go through wg0 (whether it can reach the other wireguard peer or not) you'd have to remove the routes first.
It’s really not. If you have a route down your WireGuard interface, it’s not going to stop should the end point of that interface no longer be routable.
Presumably this is an OS-level thing, that it helpfully tries to fall back? If so, I suppose it could be mitigated either by a software control that prevents using the bare connection, or by running the VPN elsewhere, for example on your router.
(I see posts elsewhere in the thread now describing how to do this with iptables.)
What I heard some naughty people say is that one way to deal with these kinds of issues is to have two virtual machines: one that connects to the VPN, and a second one (without host networking) that can only access the internet through the first one.
The firewall on the gateway VM has to default to drop and only forward traffic from the second VM to the VPN interface.
The gateway VM should NEVER provide DHCP or DNS to the second VM, since this is the easiest way to shoot your foot off: a single DNS request may give your identity away.
Even if you did that, if I have oversight of the network in a country, like a 5+ eyes level of oversight and you were in a 5+ eyes country, I'd still be able to see where you are and what you would be doing.
Road & Rail networks have a lot in common with digital networks, when you think about it.
Personally I'd have a machine in a foreign country and use that, either via an app or a friend in front of the machine, to do the download(s) with a time delay to avoid obvious links, and then sneak it back across the border in bits.
Insecure home networks can be useful, and there is no limit to the number of times and algorithms that can be used to encrypt files Russian Doll style.
Foreign countries which do not data-share or extradite have their uses, but media like news orgs, YouTube and others can be helpful for establishing which hackers have been extradited. Assume all countries have hackers attacking the US or some other country and then look for missing news stories, YT videos and that sort of thing.
Then look into what relations are like between the two countries and go from there.
If you do your research or homework there shouldn't be any risks.
I use policy routing and let WireGuard mark the packets it wants to send out. The main table is empty (there is no route... at all). External connections insert routes into their own tables, and the WireGuard interface does this too; any packet not marked by WireGuard will use this routing table. If WireGuard is missing, nothing marks packets to leave via the external interfaces. I have an additional rule at the end of the policy rules that prohibits any traffic that did not have an applicable route in any table.
Kill-switches are a misnomer. What you really want is a firewall external to your "target" system that blackholes all traffic not destined for the VPN gateway's IP address. Then make sure packets travelling the parent interface are ONLY VPN traffic, or nothing in case of the tunnel dropping (Wireshark is good here.)
Are killswitches actually fast enough? Serious question, I don’t know much/anything about networking internals.
I never trust killswitches and when I want to ensure I don’t leak anything, I bind to the VPN interface instead, but I don’t know if that actually gives better security?
I've never used Surfshark myself but they advertise as having a killswitch included in their VPN [0], so it sounds like in this case it definitely failed.
Just set up your routes/firewall so that the only possible way traffic can leave your machine is via the VPN device. VPN dies? No traffic going anywhere. Pretty much networking 101 stuff.
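Something like this minimal sketch is what I mean (run as root; interface names, the VPN server address, and port are placeholders). If the tunnel drops, packets to anywhere other than the VPN endpoint simply have nowhere to go. DNS needs to go over the tunnel too, or you still leak lookups:

    # Sketch of a firewall "kill switch": default-deny outbound, allow loopback
    # and the tunnel interface, and allow only the encrypted WireGuard/UDP
    # traffic to the VPN server out of the physical NIC. Placeholders throughout.
    import subprocess

    VPN_IF = "wg0"
    PHYS_IF = "eth0"
    VPN_SERVER = "203.0.113.10"   # placeholder: your VPN endpoint
    VPN_PORT = "51820"

    RULES = [
        ["iptables", "-P", "OUTPUT", "DROP"],                        # default-deny everything outbound
        ["iptables", "-A", "OUTPUT", "-o", "lo", "-j", "ACCEPT"],    # loopback is fine
        ["iptables", "-A", "OUTPUT", "-o", VPN_IF, "-j", "ACCEPT"],  # anything via the tunnel is fine
        # The only thing allowed out of the physical NIC is the tunnel itself.
        ["iptables", "-A", "OUTPUT", "-o", PHYS_IF, "-p", "udp",
         "-d", VPN_SERVER, "--dport", VPN_PORT, "-j", "ACCEPT"],
    ]

    for rule in RULES:
        subprocess.run(rule, check=True)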
I'm skeptical. Maybe the FBI did use some illegal tool or more likely, a legal tool that they control the backoffice for, but the story I read outlined some pretty common blunders. This makes all the more sense if he was a mono-technical web app guy for most of his career.
If he did pay with paypal, that was his first mistake. Theoretically, you pay for your VPN with a gift card you bought with cash, maybe even one you got in another state and use anonymous email receiver to get your keys. Rotate accounts as project phases are completed.
Theoretically, use a purpose-built machine to keep fingerprints out of the mix, and, as others have said, a killswitch using iptables/UFW to only allow traffic through the VPN gateway. Probably wouldn't have hurt to rotate the VPN egress points as well.
Theoretically, he should have been connecting this VPN over a public hotspot not his home network. I get that this wouldn't be conducive to downloading gigs and gigs of data, but if you're the one setting it up and monitoring/junking any logging on the other side, you could also afford to do this with an SBC (or several, again bought with cash) concealed at some location(s) that just pulls the information by slow drip over public connection via VPN. SBCs get tossed into Willamette as they're rotated.
The FBI cyber guys are good and creative but with a reactive situation like this I think old-fashioned gumshoeing, interviews and subpoenas would get them a long, long way. Once they had an inkling that it was an inside job, man, just relentlessly picking away at that with some face time would be really productive. Guaranteed that guy wasn't prepared for the bright light of honest incidental questioning, over and over, much less focused questioning. Not to mention, they are free to lie to your face in those interviews about what they do/don't know. Once they had a list of four people, start shoving out the subpoenas and see what clicks.
Part of their toolbox is also having an idea of what real attacks look like, knowing what those actors care about leaving behind or not concealing, and in the absence of those earmarks, they know they're looking at something "unusual". Now if I start telling you that I did find evidence of an attack of type Y where there absolutely is none, and you're all too eager to help me prove that theory, that's probably a bad sign. Did he have a plan of his own in place to frame some other ransom toolkit or plant seeds of a breach? I mean, what would you think if you walked into a company that got hit with a massive ransom demand, evidence of data theft, but no typical signs of a data breach from the usual suspects? State actor? Now this is serious. Was it their Exchange or RDP server in the closet? Oh, you're cloud-only? What platforms are we talking about? A state actor has zero-days into a major cloud platform?? MS? AWS? Now this is really serious. Or... maybe none of that is the case. Log files are all missing? Who made that decision? I mean, on and on it goes, but it seems pretty easy to see it unravelling once you start pulling on a thread.
If one was really interested, they could probably find some good information in PACER about the information that supported the indictment. Chances are he's already confessed and is attempting to plead out. They'll surely throw the book at him. Insider ransom jobs on US hardware companies are not a tolerable phenomenon.
pfsense does that by default. If the VPN connection is deemed to be down (it does some pinging to evaluate it) it silently defaults to unencrypted.
There is a convoluted way to configure it so that it blocks VPN IP packets from going directly to the WAN, but I noticed it seems to terminate connections when some packet is lost.
VPNs are not intended to mask illegal behavior. The assumption is no one will care enough to try to get the real IP. So blatantly breaking the law throws that out the window.
It's an operating system issue. It's really difficult to get absolute invariants you can trust out of an OS. If network connectivity security is vital, then the only real solution is to set up your environment in a VM or on a separate box that will lose the adapter routing info if it goes down.
I’d be willing to bet that for most of the VPNs that are getting advertised by YouTubers (NordVPN, SurfShark, ExpressVPN, PIA, et al) it’s 100% marketing and they don’t actually care whether their “kill switch” works 100% of the time.
After all, they are not as trivial to implement as it sounds.
Something I don't understand is that he executed it fairly well... With the exception of using a weak, evidently broken, vpn instead of something like mullvad + tor. If you are going to do something like that (which is already a big "why???"), why put such little effort into your own security?
That option isn't enough; it only works when the VPN connection is up. If your internet connection is flaky and you lose connectivity to the VPN itself, your OS will revert to using its default gateway, and your home IP, which is how the guy got caught.
You need your firewall to block any internet access when the VPN is down.
I have something like that set up in a Docker container for my torrenting VPN system, so I never connect with my residential IP.
That's incorrect. If you lose access to the ExpressVPN connection then (while the VPN software is active) you _completely lose access to the internet_, until you disable a switch in the software. You get network errors and the like, at least with the ExpressVPN tool.
This is a proprietary application, not just using the OS-integrated VPN software. Given your comment, I imagine it sets up firewalls.
I don't use ExpressVPN and apparently the guy with Surfshark thought his VPN functioned like this but it didn't. When in doubt, write the firewall rules yourself, especially if you're going criminal.
> Investigators say they were able to tie the downloads to Sharp and his work-issued laptop because his Internet connection briefly failed on several occasions while he was downloading the Ubiquiti data. Those outages were enough to prevent Sharp’s Surfshark VPN connection from functioning properly — thus exposing his Internet address as the source of the downloads.
On Linux, use network namespaces https://www.wireguard.com/netns/ . Create a separate VPN namespace and have wgN be the only non-loopback interface there, then run your application in that namespace. This also solves WebRTC-style leaks. Physical isolation is even better (e.g. using a spare Raspberry Pi).
use tor. it's specifically designed to avoid traffic leaks (as long as you don't open an external application). I trust that far more than whatever "killswitch" VPN providers have, or properly implementing a home rolled solution with iptables/network namespaces/raspberry pis. the "bouncing your traffic across 3 servers to obfuscate tracking" is a nice bonus as well.
So few people are actually using Tor that correlation-based traffic analysis has very good odds of revealing identities: get the list of employees, pick out those whose connections have accessed Tor at the time of attacks, and you'll have a very short list of suspects.
where do you get tor usage data? The only time I heard of it being used was when someone used tor on some university's wifi network to send a bomb threat. In that case it would be fairly easy to get the data, but if it's just a random guy using his home internet connection, can you get their ISP to cooperate? do they even keep such data around?
Many of the tor relays are operated by intelligence agencies (this is known, and obvious otherwise).
If you connect to one of those, your IP is logged somewhere forever. These same agencies, depending on your country, can probably know who that IP belonged to at the time instantly, or quickly enough.
If the agency has a wiretap on the ISP's fiber, then they'll know even if you're not connected to one of their relays (you could use the unlisted bridges, but they probably know those anyway)
Getting the metadata is not a problem if you're the FBI (like in the article). You can get it directly from ISPs, or collect it yourself 24/7/365 in bulk and search it when needed.
That wouldn't solve this problem. The VPN issue was a 'killswitch' mode that was turned off, or didn't work 100% of the time. You could encounter the same issue with Tor. They didn't have their IP leak via webrtc or some special protocol, it was the VPN app itself with the bug.
>The VPN issue was a 'killswitch' mode that was turned off, or didn't work 100% of the time. You could encounter the same issue with Tor.
No you won't, because tor isn't a VPN. In fact it specifically tells you not to use it with other browsers/applications[1]. It's a combo of a browser + tor client. The browser has its proxy set to the tor client, so the only way it can reach the internet is via tor. Getting that to behave properly is far easier/reliable than trying to get it to work for every application/os/hardware configuration.
they were able to infiltrate various NGOs (that host tor nodes) and/or set up fake NGOs, but they weren't able to infiltrate and/or set up fake VPN providers?
This has nothing to do with a company-supplied VPN. This person was an employee using a third party VPN specifically to hide his identity from Ubiquiti.
License plate from car and if there are cams back along the route from the house, they can follow it back home. Also cellphone pings while you are sitting there, fingerprints, facial recognition, local logs in the router, etc.
Hopefully this gets upvoted more. It somewhat repairs my view of Ubiquiti's brand now that more details have come out about what actually happened. I hope the courts will determine the full extent of the truth.
What's the better set-it-and-forget-it alternative? The Dream Machine Pro has been truly wonderful for me. I'm not interested in the equivalent of "a totally customizable Linux box running pfSense".
Then no thanks. I have a finite number of hours on this planet and I have no interest in spending more of them trying to configure network equipment with commands like
Ubiquiti does a great job of having good defaults out of the box, a straightforward UI, and remote management. I could walk my mother through setting up Unifi equipment over the phone and even get so far as to grant remote access to me without ever leaving it in an insecure state. I wouldn't get past setting an IP address in the Mikrotik section before losing that.
> Ubiquiti does a great job of having good defaults out of the box, a straightforward UI, and remote management.
Which UI? The UDM has two. Mobile devices have another. Some features are only available on one of the UIs, and when the feature is available on both, it often behaves differently. Sounds pretty straightforward to me.
I ended up buying a Protectli box (FW6E) with OPNsense preinstalled. It's been fantastic, and blows Unifi out of the water.
Thanks for the suggestion. Do you have any recommendations like Protectli, but for the wireless AP?
The fragmentation across Ubiquiti devices drives me crazy - I have the "wrong" consumer wifi AP or the "wrong" (pro)consumer router (take your pick), so half of the wifi AP functionality is disabled - for no good reason - they just have two (or more?) product lines that don't work together, and it's hard to realize that until you get the products. Overall I've been pretty disappointed by my Ubiquiti hardware - I feel like it was advertised as higher performance and better functionality than it ended up being.
I was mostly trolling to see if anyone would jump in with an actual alternative, because I also wanted to know. I used to buy Ubiquiti PCIe radios to use in Mikrotik boards before Ubiquiti came out with their own complete devices.
Although there is a web UI for Mikrotik too, Ubiquiti's was definitely slicker.
You can assume that from 99% of all device vendors. It is a shame, but it is a reality. That is not something I would put on Ubiquiti, especially considering all their other serious flaws in product management.
My $50 TP-Link router came with a card in the box explaining their GPL compliance and giving me a link to their changes. I would think if I'm going to blow hundreds on Ubiquiti gear, they could at least match TP-Link when it comes to actually following the law.
Side note: Free suggestion for a new startup. Make indictments pretty! What is it with all these fonts? Looks like they really typed this on a typewriter. Are all court clerks just frustrated novelists?
As someone who worked (still do) with the legal community and hundreds of lawyers, they are the hardest to change in order to modernize. It's slowly changing, but the older generation would use a bottle of ink and a feather to write if you gave them that.
I'm not from the US, but have met folks from the US. Essentially, the law firms change and digitize, the older institutions such as the courts, will be the last as they are full of people that are much older (judges) than the general workforce, and to them, there is no incentive to change what works, and what is comfortable to them.
Also, they are trained to argue. So you can't just go in there and easily change their mind.
I'm waiting for them to follow the footsteps of recipe writers. "It was a frosty December morning, much like the ones I spent with my grandfather up north in my childhood" Just show me the deposition!
If convicted, I hope this guy spends a long time in prison - what an incredible ass.
As a long time user of ubiquiti devices, I’m glad this was the actual story. It actually makes me feel a lot better since this kind of risk is extremely hard to defend against and unrelated to their hardware.
Makes me wonder if (at least some of) the posts dunking on the company leadership and the engineering in various comments around the internet were also him.
I think there's a fair chance that whatever someone's mental/emotional state, if a workplace could motivate someone to act as extremely as he did, there are others similarly disenfranchised.
> So... do the ubiquity things work again without being tied to their 'cloud'?
Ubiquiti works without their "cloud" if you install their management software on your computer (I use a VM on my server). In my experience, you don't even have to run their management software once you have the network configured. If you are paranoid, install their s/w on a VM, set up your network, and then shut down the VM. You should bring up the management VM periodically for software updates, but otherwise it runs fine.
Note that this also has the chance to be a security issue: the management software can run nightly software updates on your APs, possibly closing flaws before your "periodic" manual efforts would.
Given how often we've seen people shooting the messenger in our field, we probably should save our commentary until the outcome of the trial.
A lot of corporate leadership are extremely paranoid and that's why whistle blowing is such a dangerous activity.
How strong do we all feel most digital evidence gathered really is? How much faith do we have in the technical knowledge of the investigators? Or the courts to parse this type of evidence?
I wonder why the developer had access to so many resources on AWS and GitHub?
Can’t these excessive permissions be removed?
Why was it undetected for such a long time?
As a "cloud lead" equivalent I make sure I have as a little access as possible and all my (and everyone else's) actions are logged in an (as much as possible) immutable way. And if anyone managed to log into any AWS account with root credentials (MFA token stored in a safe) we get alerts in GuardDuty, Slack, and email within a couple of minutes.
AWS provides all the tools to do this and it does not take that much work to implement. There is zero excuse for a company to allow cowboy shit like this.
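For the log-deletion piece specifically, even a crude deny-delete bucket policy on the CloudTrail bucket raises the bar. A minimal sketch (the bucket name is a placeholder; note that anyone who can rewrite the bucket policy can undo this, which is why shipping logs to a separate account or using S3 Object Lock is the stronger move):

    # Sketch: deny object deletion on the CloudTrail log bucket. Bucket name is
    # a placeholder. This does not stop someone who can rewrite the bucket
    # policy itself; cross-account log shipping or S3 Object Lock is stronger.
    import json
    import boto3

    LOG_BUCKET = "example-org-cloudtrail-logs"   # placeholder

    deny_delete_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyLogDeletion",
                "Effect": "Deny",
                "Principal": "*",
                "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
                "Resource": f"arn:aws:s3:::{LOG_BUCKET}/*",
            }
        ],
    }

    boto3.client("s3").put_bucket_policy(
        Bucket=LOG_BUCKET, Policy=json.dumps(deny_delete_policy)
    )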
If you wanted to though, I’m sure you could figure out a way to get around this to exfiltrate data and simulate a “hack”. If Snowden could leak NSA data, what hope do you have of securing your company’s data from a nefarious person in a leadership position?
Is it possible? Technically, yes, but the level of paranoia and mistrust required to prevent this kind of thing is never going to be supported by leadership or by other engineers trying to do their jobs.
Should you lock up your root key effectively like you describe? Absolutely. Should you do other things to restrict access to sensitive data? Absolutely. But whatever you do, you’re not going to be able to avoid a sophisticated internal attacker without making normal work extremely difficult.
As a conman scumbag cloud lead, wouldn't you make sure to have as much access as possible and have none of your actions logged (and maybe nobody else's, for extra confusion), or, if forced to, logged in a super-ephemeral way? Root credentials would be stored only on yellow sticky notes.
Accounts and actions like this are easily flagged in AWS GuardDuty, since they are so foreign to the usual user baselines. And that's outside of the normal security standards you'd expect from next-level monitoring by companies such as CrowdStrike.
It took Ubiquiti weeks to notice these issues, and he used the AWS root account; that account should be actively secured and alerted on for abuse using AWS GuardDuty or similar.
I've made my own top level comment raising where I have more questions than answers from this announcement.
It reports to anyone you want; typically, though, you would point it at an SNS topic, which can be ingested by any other service, group, or endpoint you wish.
> Investigators say they were able to [subvert the attacker’s VPN] because his Internet connection briefly failed on several occasions while he was downloading the Ubiquiti data. Those outages were enough to expose his real address.
Ahem, how convenient! Call me a paranoid Internet-forum dwelling cyber-loon, but that smells an awful lot like parallel construction.
When the authorities log the start and end times of every TCP session at both ends they don’t need a VPN leak to correlate traffic corresponding to “GET /secrets” from the client with a response from the server.
It feels like a disgruntled and sophisticated Ubiquiti employee is the last person who would get caught out by a DNS leak while waiting for their VPN to come back up after a flap.
On the other hand, I guess if you’re crazy enough to behave this criminally, you can be forgiven at least for not thinking straight in terms of opsec.
That doesn't sound paranoid to me, especially as the US government has relied on the old "he made a mistake with his VPN" explanation before (against a target that is much better funded):
I'm generally quite suspicious of lying law enforcement covering up evidence collected via illegal and inadmissible methods, but I doubt that's the case here. Rather than use Tor he decided to just wing it and use his personal consumer VPN that was paid for with his own PayPal account. He used his own credentials for the "hack" and didn't even come up with an excuse for how that could have plausibly happened other than "lol it was AWS that got hacked".
>It feels like a disgruntled and sophisticated Ubiquiti employee is the last person who would get caught out by a DNS leak while waiting for their VPN to come back up after a flap.
If it was An-Cheng or Stig I'd agree but given a lot of what Ubiquiti puts out... the bar isn't all that high. Given all of the dumb moves we know he made, it doesn't really surprise me that he would screw up guarding against a VPN dropout. It wouldn't even surprise me if Surfshark screwed up blocking while reconnecting, especially if it was DNS I can see someone making a boneheaded decision to switch name resolution back to the local network while trying to reconnect to lookup the address of the VPN endpoint. It's a consumer VPN, they only really care about hiding from DMCA notices and evading geo-blocks.
Now the Silk Road arrest of Ross Ulbricht on the other hand, that was a travesty of justice. Not that I think he was innocent or shouldn't be in prison, just that in a perfect world many of the prosecutors and federal agents involved would be in a cell beside him.
If anyone is looking for the alternative to Ubiquiti since their fall from grace in recent years, I've found it to be HP Aruba. I always use more open source stuff for personal projects, but Aruba Instant On is what I commonly recommend/integrate for other people, whereas it used to be Ubiquiti. Solid design across hardware and software. It finds that unique balance in quality/usability between cheap/unreliable, and overcomplicated enterprise.
Most pros I watch (like Lawrence Systems) don't like the UDM all that much due to lack of features. While you get a single pane of glass for managing the WiFi and router, having some other product for a router tends to be better (pfSense or OPNsense). I've been running both for 6+ years now and like it.
The only thing I've ever wished the UDM could do was change its WAN MAC address, because I wanted to keep the same IP I'd had for years when I got it. No particular reason other than sheer nerdiness, but I got over it.
Other than that, I've never wanted it to do something that it can't do.
If he wanted 20 btc all he had to do was put up one of those shitty livestream YouTube scam videos that you always see (Elon Musk was a 2020 favorite). And he would not have been arrested either.
Teleshopping, except with fake crypto giveaways telling people to send in cryptocurrency to get more back, usually done from hacked accounts. Surprisingly successful.
Good summary of the whole saga by Crosstalk youtube channel which covers mostly Ubiquiti: https://www.youtube.com/watch?v=paLm0tP5GbI