NewsLab
Apr 28 19:09 UTC

Cybersec is a thankless job: expanding workload and shrinking pay packet (theregister.com)

53 points | by rustoo | 30 comments | Read full story on theregister.com

Comments (30)

  1. lenerdenator
    "Show me the incentives, and I'll show you the outcomes." - Charlie Munger

    Right now, if you have a security breach, at least in the US, you send out a letter telling the person that their data could be God-knows-where and offer them two free years of credit monitoring. Victims aren't going to really use that because it's essentially useless. If they've got absolutely, positively nothing better to do with their time, I guess they could file a lawsuit. Who knows what the outcome would be. Probably not in their favor.

    In other words, it's cheaper for them to overwork the InfoSec guys/gals and barely care about what happens outside of day-to-day operations than it is to really secure their stuff. So they don't spend that money.

    If you saw corporate valuation-cratering fines being implemented - the kind that would end the c-suite's careers and bring shame to their family lines for seven generations - I bet that they'd start catering lunches for the InfoSec team.

  2. gadders
    New idea: AI tool to help generate legal letters to companies after they leak data to cause them maximum inconvenience.
  3. lenerdenator
    You could also create an AI tool to help generate letters to lawmakers about how they need to make a real dent in this between reruns of Matlock in the retirement home.
  4. intended
    The human-speed legal system would become collateral damage.
  5. jcgrillo
    I don't think fines are enough of an incentive. They're too easy to evade and insufficiently consequential to the people who are actually shipping code. Moreover, making them enormous (as you put it well, "valuation-cratering") unfairly punishes people who are not directly responsible for the failure. Instead, like in other engineering disciplines, engineers need to be personally liable for the consequences of failure. Not necessarily every engineer--not every mechanical engineer needs to be a P.E.--but someone directly responsible for the quality of the work needs to stake their reputation on it, and suffer the consequences when it fails.
  6. adrianN
    In practice this would mean that you need to show conformance to some kind of security process. The actual outcome of that process is of secondary importance as long as you can show that you're compliant. Very carefully written process documents _can_ improve things, but my confidence in security processes is low for companies without intrinsic motivation.

    I think one can reasonably argue that sufficiently large fines that don't have a "but we followed ISO-xyz" loophole could produce better outcomes. The difficult part is making companies care about existential tail risks.

  7. jcgrillo
    Yes, it'll generate a lot of super annoying paperwork. But, hopefully, it will also tighten up software engineering standards. It has worked well in other disciplines.
  8. adrianN
    There already are areas where such standards exist, e.g. safety-critical applications in aviation. Arguably the defect rate there _is_ lower, but I still think this method of achieving it is quite inefficient. And defining a process for writing aviation software that doesn't crash is a lot easier than defining one for writing software that is difficult to hack.
  9. jcgrillo
    The missing piece is the requirement for a certified Professional Engineer to sign off on the system. That decouples the incentives from the corporate objectives, and makes it personal. We need that kind of professional accountability in software, otherwise it'll continue to be bad.
  10. adrianN
    It is my understanding that personal responsibility already exists in safety critical software development.
  11. TheRealDunkirk
    Companies are already following a bunch of standards like SOX, SOC2, HIPAA, etc., and documenting their adherence to checking all of the boxes, but incidents still happen every week.
  12. FireBeyond
    > offer them two free years of credit monitoring. Victims aren't going to really use that because it's essentially useless

    It's generally actively harmful, and the CRAs fight for this business from breaches because, almost universally, accepting the free credit monitoring means signing up for their highest-tier credit monitoring package (which can run up to $50/month), supplying a credit card, and then hoping to remember, a year later, to cancel at the end of the free period; at that point they'll convert you into a paying customer.

  13. xnx
    > "Show me the incentives, and I'll show you the outcomes." - Charlie Munger

    Also note that, like pharmaceutical companies, infosec consultants find treatment more profitable than cure.

  14. mystraline
    Yep. I had a chance to go for a cybersecurity degree. And every time I've looked at it, the career path is basically an applied insurance job.

    Cybersecurity does not make money. It does not raise a company's profits. Instead, it is a compliance, contractual, and legal defence to repel lawsuits and keep data boundaries clean.

    And who's the first to go? Groups that don't make money. Like cybersec.

  15. blueside
    Cybersecurity certainly makes money. The good ones make a lot. I mean a lot.

    But if you think you can just study for a year and get some security certificates and call it a day, you're going to be sorely disappointed in the compensation.

  16. czbond
    OP means that security ops internal to a company don't generate add-on revenue. Cybersec definitely adds revenue for services and provider companies.
  17. giancarlostoro
    Just commented this elsewhere, but my take on cybersecurity today: it's about to blow up in demand, with so many skiddies now able to hack anybody with an LLM. We are seeing websites, systems, and companies being compromised at an alarming rate. I suspect one of these days we will see a headline of a compromise that will shock and horrify us all. Anyone sleeping on cybersecurity is a ticking time bomb.

    Honestly, if you wanted to make a YC company today that targets AI in a meaningful way, I'd say make it focused on cybersecurity analysis. ;)

  18. evan_a_a
    Whenever I tell people I work in computer security, their first question is "are you worried about AI taking your job?" To which I just laugh and respond, "AI is job security."
  19. giancarlostoro
    It really is! If anything, AI will only help you: you aren't worried about it giving you bad code, just bad answers, which you would validate anyway. I think the other area where AI could be interesting, and I don't hear much buzz about it, is outages: if it can query all the online systems and logs in your cloud, it could probably triage an incident faster than an entire outage team could, in theory anyway. Surprised nobody's built such a system yet. ;)
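The triage idea above can be grounded with a toy sketch. This is not any real product's pipeline, just one plausible first step: normalizing raw log lines so recurring errors collapse into ranked signatures, the kind of condensed summary a triage bot could hand to an LLM (or a human on call) instead of gigabytes of raw logs. All log formats and names here are invented for illustration.

```python
import re
from collections import Counter

def normalize(line: str) -> str:
    """Collapse timestamps, hex ids, and numbers so similar errors group together."""
    line = re.sub(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}\S*", "<ts>", line)
    line = re.sub(r"0x[0-9a-fA-F]+", "<hex>", line)
    return re.sub(r"\d+", "<n>", line)

def triage(log_lines):
    """Rank normalized ERROR signatures by frequency, most common first."""
    sigs = Counter(normalize(l) for l in log_lines if "ERROR" in l)
    return sigs.most_common()

logs = [
    "2024-04-28T19:01:02Z ERROR db: connection refused to 10.0.0.12:5432",
    "2024-04-28T19:01:03Z ERROR db: connection refused to 10.0.0.13:5432",
    "2024-04-28T19:01:04Z INFO  cache warmed in 81ms",
    "2024-04-28T19:01:05Z ERROR api: timeout after 3000ms",
]
for sig, count in triage(logs):
    print(count, sig)
```

The two "connection refused" lines differ only in host, so they collapse into one signature with count 2, which would be surfaced first; a real system would then feed the top signatures plus sample lines to the model for hypothesis generation.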
  20. evan_a_a
    I mean it in the sense that AI security hype and the larger geopolitical environment have woken a lot of people up to the reality that they need to consider security. And the ones that haven't woken up yet will get a wake-up call when they are breached. It also increases the demand for real security expertise, which is already scarce.

    Also, in my niche (hardware and embedded product security), AI doesn't have a functional impact on the work except in code analysis, and even that is difficult given the level of abstraction these systems are built at.

  21. giancarlostoro
    That's fair, though even that could just be a matter of time, as people build tools that interface LLMs to the physical world. I wonder how something like Bus Pirate could be used with an LLM (maybe a more powerful version of it?) to grok and poke hardware all over the place.
  22. evan_a_a
    I foresee issues with really getting use out of any commodity language model in the hardware security context, because hardware systems notoriously lack standardization. And oftentimes the technical knowledge (datasheets, app notes) is locked behind vendor NDAs, or straight up not documented, existing only in the minds of engineers. The implementations of said designs are similarly highly proprietary, with few public "real" systems to learn from.

    So the issue is two-fold:

    * The knowledge must be documented and accessible for training.

    * A bespoke model must be trained on this documentation.

    It is unlikely that both of these things happen in the general-model context. Perhaps individual chip vendors will eventually pursue this, but I suspect it is just not a priority for them.

  23. debarshri
    I am building in the cybersec space. I don't think you even need script kiddies now. Internal employees running dangerous, sloppy ops with AI is itself a cybersec nightmare.
  24. thewebguyd
    > I suspect one of these days we will see a headline of a compromise that will shock and horrify us all

    But we've had the shock headlines already, and nothing changes. We've seen hospitals get hit with real-life consequences for patients, and the SSNs of essentially all US citizens have been breached multiple times now. Passwords as a concept are basically obsolete. There's even more.

    That bomb has already been going off.

    If anything, I'm seeing the opposite: companies are throwing security to the wind to go all in on AgEnTiC AI.

    If we want change in cybersecurity, then there need to start being real consequences for a breach, not just free credit monitoring. The companies that are proven to be negligent should face actual financial and criminal consequences.

  25. xnx
    Do you think that AI helps security offense more than defense? It's not obvious to me that it does.
  26. a34729t
    With Claude writing so much of the software in big companies, Anthropic is well-positioned to eat up SAST, DAST and a lot of the supply chain analysis. EDR and proactive security are still going to be massive businesses, however.
  27. fulafel
    The industry culture around security work and career paths seems just f'd up.

    Instead of ensuring we build systems on robust foundations, people end up in a swamp of frustrating roles: SOC staff chasing false-positive alarms all day, peddling ineffective add-on security products, management CISO roles where you're expected to take responsibility for existing insecure Microsoft-etc. infrastructure without the power to change things, or demotivating compliance bureaucracy that doesn't actually improve security.

    I'd argue work on meaningful security improvements is mostly available outside industry security roles.

  28. evan_a_a
    The company I work for (consulting) upended its entire strategy to basically use pentests to sell managed services (XDR, NDR, SOC, vuln scanning, "continuous pentest") that do nothing to meaningfully move the needle on security. Which of course the market will buy, but it is incredibly demoralizing to see expertise sacrificed on the altar of recurring revenue.
  29. everdrive
    Companies don't fundamentally care about cybersecurity. Most of them see cybersecurity as being similar to waste management; it's not something you get excited about. Sure, your company _must_ have a waste management plan, but it only exists out of pure necessity. It's required to do the real work of the company, but if you had a magic wand and never had to deal with it, you'd choose that option. And, like waste management, plenty of companies outsource their cybersecurity, since it's cheaper and they don't really care about it.
  30. liquid_thyme
    Yes, you're correct. To add: companies don't fundamentally care about all the things we like to think of as "nice things", like good design, lack of dark patterns, robust security architecture, minimizing technical debt, etc.

    If customers cared about reputational damage from cybersecurity incidents (sure, some do), then you would see that reflected in companies' priorities. Also, non-technical customers don't really know who to blame for security anyway; they'll just blame the OS vendor or other random parties even if it's the application that is insecure.