Do companies you know or work for organise courses for their employees to prevent social engineering?


Boccraft


In the organisation I work for there is an optional training on the topic, otherwise it is occasionally mentioned in the mandatory annual safety training, which otherwise mainly covers things like fire safety. I would be interested to know if this is the norm or not. 


Where I work, we do occasional phishing tests, and users who fail receive mandatory training. It's a quick course through Workday; I think it takes about 30 minutes to complete.

  • Like 2

There are a lot of organizations that push some kind of mandatory training for 'failing' a phishing test.

It's an -incredible- way for them to eliminate trust in the org's security stance and to make workers feel unsafe when carrying out their jobs - especially when the bait is introduced in such a way to bypass the extant and -deeply- annoying "protections" [things like notifications that something's from an external sender, URI filtering, and other measures that alter the function of email within the org] that are ostensibly there to keep you safe from phishing.

Adding a petty punishment to the scenario causes either depression or resentment - it disempowers the user by forcing them through a mandatory "ha ha we caught you" workflow that just repeats the same imprecations over and over again without empowering the user in -any- way.

Security's function is to prevent, as much as possible, phishing emails from showing up in the first place - and mitigating the effects, should they manage to get through. If someone falls for a phish, this is an opportunity for the security org to learn about why their defenses were able to be bypassed, and discover ways to alter the workflow to make that less likely to occur in the future, in partnership with the user who was affected.

Shaming the user means they are less likely to report phishes, and much more likely to distrust actual communications from the org.

[ I may have some strong opinions on this matter, lol. ]

  • Like 3

Yep, same here. Fail an internal phishing campaign & you have to go through a mandatory training.
 

Not only that, but individuals are ranked based on how well they do. The more you fail, the lower your score, & then the more you’re targeted. 

I’m largely okay with the first part. It’s a quick training & doesn’t feel like busy work. The second part is what I’m iffy on.

  • Sad 1

Currently my org has a mandatory security awareness course that has to be completed within your first 30 days, plus a required security (physical and information) training to be completed by all employees annually. We're in a pretty regulated space with requirements for annual training. No phishing tests currently, but I think our SOC is eyeing that up in '23.


We have a mandatory annual phishing training CBT that has gotten significantly better over the years through a lot of solicited feedback. We do not punish users for failing internal phishing tests, but rather use the metrics to alter and improve the training offered.

Edited by synackbar
  • Like 1

31 minutes ago, aliasenv said:

Yep, same here. Fail an internal phishing campaign & you have to go through a mandatory training.
 

Not only that, but individuals are ranked based on how well they do. The more you fail, the lower your score, & then the more you’re targeted. 

I’m largely okay with the first part. It’s a quick training & doesn’t feel like busy work. The second part is what I’m iffy on.

The latter sounds terrible! 😨 With such behaviour on the part of IT, I can see @munin's point.

  • Like 1

I led our security awareness program for years as part of my remit. The biggest improvement we saw was when the content felt relevant to people in its look and feel. People don't want to see generic content; they want content that represents what they do day to day and what they experience and hear (company terminology). We hired a video production company to film in our offices, had business representatives describe in detail how people work day to day so we could write scripts depicting it (with all our corporate lingo), and lastly had our IR team prioritize the scenarios they see most often. This was actually cheaper than buying pre-made content from any vendor: about $200k in vendor costs and maybe $50k in people's time to cover 400k staff. Return on investment was off the charts. We refresh the content every 12 to 24 months. The team has a lot of fun working on this.

When I started, our phishing simulations (basic stuff) would get up to 20% click rates (doh!!!). After 4 years, even complex stuff using company logos was knocked down to under a 5% click rate. It wasn't just awareness training that got us there, but it was a big contributor. We made a game out of it: report a phishing email, get entered into a raffle. One of my proudest moments was when the CEO sent an email to 60k staff about a company-wide contest and it included a typo in the first sentence, so people started reporting it as spam / phishing 😅
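The raffle mechanic is simple enough to sketch, for anyone who wants to automate the drawing. A toy Python version with made-up names (I went with one entry per person; you could just as easily weight by number of reports):

```python
# Toy sketch of the report-to-raffle incentive described above.
# Names and the seed are made up for illustration.
import random

def draw_raffle_winner(reporters, seed=None):
    """Pick one raffle winner from everyone who reported a phish this period."""
    unique = sorted(set(reporters))  # one entry per person, order-stable
    if not unique:
        return None
    rng = random.Random(seed)
    return rng.choice(unique)

reporters = ["ana", "raj", "ana", "mei", "tomas"]  # repeat reports count once
print(draw_raffle_winner(reporters, seed=7))
```

Passing a fixed seed just makes the draw reproducible for auditing; drop it for a real drawing.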

I also want to reinforce that our program never tries to put blame on users or overly punish them. We want people to learn, not get beat up. We continually improve detection and filtering so we don't have to rely on users, but realize some things do get through. Security awareness plays an important role in an overall enterprise security program, but is only complementary to other processes and capabilities (such as segregation of duties, access control, endpoint lockdown, detection, response, etc.).

 

  • Like 1
  • Thanks 2

1 hour ago, doctor_tran said:

We made a game out of it: report a phishing email, get entered into a raffle.

I love the idea of using positive reinforcement to get everyone in the boat! 😁


I'm working extensively on creating a security awareness program at my company - it's pretty new for us compared to what we had previously (annual compliance training and terrible phish testing conducted by a third party with over-punitive consequences). The program's still fairly young but we have a few different methods of training and engaging employees on education topics like social engineering. We have annual compliance training, but we also reinforce learning on several security topics by maintaining a security awareness portal that's updated monthly with learning topics (with prizes and recognition for participation), posting on intranet social media (e.g., Yammer), publishing learning scenarios for teams' discussion, and allowing managers to book Infosec staff to speak to their team about a relevant topic (which we try to make short and very interactive).

We still do phish testing as well, which I know can be controversial - but we conduct it in-house now and I think if you take a carrot-not-stick approach to it, it can be useful. We also have a handful of topics we won't use as tests (no 'collect your holiday bonus!' phishing emails at the end of the year). There are red flags guides that are displayed when a user fails a test, so they can learn right away what they should've spotted. We'll also do short video training for users that fail multiple tests by entering credentials.  If they continue to fail, they'll have learning-oriented (not punitive) 1:1 training with a member of the security team. We haven't had to take any further action beyond this, as once users get the 1:1 training they've passed every subsequent test.

There's a bit more I want to implement as our program evolves, but positive reinforcement and content that is engaging and interactive has been really successful so far. Having prizes, incentives, and just recognition has helped a lot, and we've had lots of employees engaging with the extra voluntary content. Doing annual training only is just a checkbox for staff and compliance alike, and imo really isn't enough to reinforce key concepts like recognizing social engineering.

Edited by Halcyon
  • Like 2

Just now, Halcyon said:

We'll also do short video training for users that fail multiple tests by entering credentials. 

As a side note, I would personally recommend having something not-video as an option, for those of us who don't do so great absorbing video content. At the very least, accurate subtitles being available will go a long way. 🙂

  • Like 1

A few years ago I set up and ran a few phishing campaigns for a small firm. I found the most value in phishing tests to be making sure staff understand how to report phishing attempts. Are they forwarding to an inbox? Do you have a mailbox-specific tool? Click rates were too variable to ever extract anything meaningful from, but making sure staff know where to report these things, and having processes in place to handle them, provided a decent amount of value.

Then tie this back to your security awareness training: make it clear that security wants you to report anything you find suspicious and that we'll action anything you send us.
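If you're tracking this, the metric that matters in that setup is report rate rather than click rate. A minimal sketch (the names are made up):

```python
# Sketch of tracking report rate: of everyone who received the simulated
# phish, what fraction reported it? This is the number the post suggests
# watching instead of the noisier click rate.
def report_rate(recipients, reporters):
    """Fraction of recipients who reported the simulated phish."""
    unique_recipients = set(recipients)
    if not unique_recipients:
        return 0.0
    return len(set(reporters) & unique_recipients) / len(unique_recipients)

print(report_rate(["ana", "raj", "mei", "tomas"], ["raj", "tomas"]))  # → 0.5
```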

  • Like 1

My biggest advice on running phishing simulations is to not focus on getting clicks. Focus on creating moments of reflection. It would be easy for anyone to create a phishing simulation that gets high click rates. The art of it is to understand where your user population is and create a phishing simulation that sits right at the edge of that understanding. You want to create a moment where someone slows down to think. Making a phish email too easy doesn't work; making one too hard doesn't either. You want to play in that zone where they think "something doesn't seem right, let me look at this more carefully". In the moment, it doesn't matter whether they end up clicking or not. The click rate just helps give you feedback on where to go as a next step.

You adjust this threshold based on previous results and the type of education / training you provide. The metric of success is a trend, not a specific target. 
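To make the "trend, not a target" idea concrete, here's a toy sketch of a difficulty-adjustment heuristic. The thresholds and level names are invented for illustration; a real program would weigh far more than two data points:

```python
# Sketch of "the metric of success is a trend, not a target": look at the
# direction of recent click rates and nudge simulation difficulty
# accordingly. The 2-percentage-point thresholds are illustrative only.

def next_difficulty(click_rates, current="medium"):
    """Suggest the next simulation difficulty from click-rate history.

    click_rates: fractions per campaign, oldest first (e.g. [0.20, 0.12]).
    """
    levels = ["low", "medium", "high"]
    if len(click_rates) < 2:
        return current  # not enough history to see a trend
    trend = click_rates[-1] - click_rates[-2]
    idx = levels.index(current)
    if trend < -0.02 and idx < len(levels) - 1:
        return levels[idx + 1]  # click rate falling: raise the bar a notch
    if trend > 0.02 and idx > 0:
        return levels[idx - 1]  # click rate rising: dial difficulty back
    return current              # flat trend: hold steady

print(next_difficulty([0.20, 0.12], "medium"))  # → high
```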

 

  • Like 3

In my organization there isn't any training on the matter.

There are, however, as many of you have already stated, phishing tests from time to time. I don't know the criteria used to choose the targets or whether there are any consequences for those who take the bait.


1 hour ago, doctor_tran said:

My biggest advice on running phishing simulations is to not focus on getting clicks. Focus on creating moments of reflection. It would be easy for anyone to create a phishing simulation that gets high click rates. The art of it is to understand where your user population is and create a phishing simulation that sits right at the edge of that understanding. You want to create a moment where someone slows down to think. Making a phish email too easy doesn't work; making one too hard doesn't either. You want to play in that zone where they think "something doesn't seem right, let me look at this more carefully". In the moment, it doesn't matter whether they end up clicking or not. The click rate just helps give you feedback on where to go as a next step.

You adjust this threshold based on previous results and the type of education / training you provide. The metric of success is a trend, not a specific target. 

 

Great point!! As with any successful learning, the cognitive task level must be right: not too easy and not too difficult. Thank you for highlighting this! 😃

Now I would of course be interested to know whether you send different emails to newcomers in the organisation than to old hands in your learning programmes?  😏 Or is it too much effort? 

Edited by Boccraft

1 hour ago, Boccraft said:

Great point!! As with any learning success, the cognitive task level must be right, not too easy and not too difficult. Thank you for highlighting this! 😃

Now I would of course be interested to know whether you send different emails to newcomers in the organisation than to old hands in your learning programmes?  😏 Or is it too much effort? 

Generally no different simulations for new hires, because the company-wide phishing simulations are low or medium difficulty. New hire onboarding has the security awareness training content. We also avoid (though not always) including things in phishing simulations that were not covered in training in the last 2 years. It's not fair to test against something you never taught/communicated. We only use high difficulty scenarios for niche groups where we know they've had extra training or are likely to be targeted (sysadmins, the security team, executive assistants, etc.), or where a team leader asks us to tailor something. We've gotten special requests from HR, finance, and the team that manages all the executive assistants. We try to avoid broad high difficulty simulations, as we didn't feel they were productive. We vary the topics and techniques enough that it doesn't matter if someone is new or not. If we see successive high click rates on medium difficulty, we may look at adjusting the training and awareness content and dial down the difficulty to see what happens. The target is 4 simulations a year, but some business units run them more frequently.

Examples of how we categorize low, medium, and high (not hard-set criteria; this relies on the professional judgement of the awareness team, corporate comms, and the threat intel team):

- high: somewhat targeted. Uses the correct logo, terminology, and current events. No obvious signs something is wrong (no typos, URLs that look correct at first glance, correct logos, etc.). Example: we created a fake LinkedIn email that went to our cyber team (IR, threat intel, defense, etc.) where it baited them to click on a link to see and like the latest post from our CISO. That was a fun one, requested by the team's leader.

- medium: no use of correct or high-quality logos, one instance of incorrect company terminology or description, messaging that scares or creates a sense of urgency. Example: a fake e-commerce notification that their credit card starting with "37" was set up for a recurring charge, with a link to dispute it. For those in the know, every corporate American Express card starts with "37". This one created so much noise and I got egg on my face. Big lesson learned on my part: we should have given our Amex account exec a heads up. We gave our internal help desk a heads up but forgot to tell Amex. Amex thought our account was hacked when there was a massive flood of calls 😓😓😓

- low: more than one sign it's a phishing email, such as no use of any correct company assets, suspicious-looking URLs, typos, or spam-like lures. Example: a dodgy-looking email from the "security team" warning the user that they visited a website they shouldn't have and need to click the link to explain what they were doing. I got an angry call from an executive about this one. The day before, he had hit a cert issue on a website, and he remembered that when the email came through. Instead of calling the help desk (as covered in training), he stared at the email for 45 minutes before ultimately clicking it and seeing it was a phishing test. When we talked, I just let him vent about how he'd wasted his time. Ultimately he realized his own mistake (he knew something wasn't right and should have called for help).
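For fun, the bucketing above roughly reduces to counting the red flags you deliberately leave in. A toy sketch (the signal names are my paraphrase of the post, not a real rubric; as noted above, the actual call relies on professional judgement):

```python
# Rough sketch of the low/medium/high bucketing described above: count how
# many deliberate "tells" the simulated phish contains. Signal names are
# illustrative paraphrases, not a formal taxonomy.

def difficulty(signals):
    """signals: set of red flags deliberately left in the simulated phish."""
    red_flags = {"typos", "wrong_logo", "suspicious_url",
                 "wrong_terminology", "urgency_lure"}
    count = len(signals & red_flags)
    if count == 0:
        return "high"    # nothing obviously wrong: targeted-quality phish
    if count == 1:
        return "medium"  # exactly one tell, e.g. an urgency lure
    return "low"         # multiple tells stacked together

print(difficulty({"typos", "suspicious_url"}))  # → low
```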

  • Like 1

I got sent a phishing test that said I needed to "Change my password". This was at the same time I was getting onboarded at my new company and got 10+ other legitimate "You need to change your password" emails from all of the internal services. Mea culpa, I clicked on it, but I definitely felt very salty about having to do another bit of training for it. Especially when pretty much every email coming from outside the org has big ol' headers saying so, but not the phishing test email.

  • Sad 3

10 hours ago, Boccraft said:

The latter sounds terrible! 😨 With such behaviour on the part of IT, I can see @munin's point.

Definitely not the best setup 😅 I remember when I first joined the org, there was a great developer on my team who was near retirement. They were constantly going through retraining & I felt awful. 

  • Sad 1

Burnt-out CISO/1-Man-InfoSec-Dept checking in:

Pro-tip if you're in charge of implementing these things: try to borrow your marketing department.  Not only is "get ideas to stick in people's heads" literally their job description, but the tools they can bring to the table for running phishing tests make the infosec stuff look like Fisher Price.  Some of their analytics get terrifying.  And, in my experience, all marketing people secretly want to make comically-blatant propaganda posters or run guerrilla marketing campaigns; so if you can give them an excuse, a few key points you want to get across, and corporate approval to do it, they'll run with it and do most of the work for you.

What I found most effective in terms of education was teaching people the psychological "tricks" phishers use to try to short-circuit their brains.  Things like urgency, authority, and familiarity.  Get an email with a "RE:..." subject from someone claiming to be from [regulatory body] that you've never talked to, saying that if you don't log into some website by 5PM you'll get sanctioned and fined?  Yeah, you should be suspicious of that, regardless of how good the spelling and grammar are.  Way too many legit emails have typos and spelling errors to use those as "tells".  If you've never looked through the inbox of someone in "Accounts Payable", do it (ask them first, even if you don't technically have to; don't be a creep).  The number of "Sub: Invoice #xxxxxx; Msg: See attached" emails with a freaking .rtf inside a .zip they get that are completely legitimate is horrifying!  Like, literally >90% of the actual phish I saw looked less sketchy than a significant portion of the legitimate emails people got; and these weren't small companies, UPS Freight was one of the worst offenders... one of the more reliable "tells" for a UPS Freight phish was that it didn't look sketchy enough.

Which brings us to phishing tests.  Use real phish.  Go through your SPAM filters and phish reports and find real phishing attempts that have been directed at your org; strip out all of their links / trackers / payloads and replace them with your own.  If you want to teach someone to ride a bike, you put 'em on a bike.  You give 'em a helmet, cover them in pads, and maybe put some training wheels on to make sure they don't hurt themselves too badly, but there's just no substitute for the real thing.  By their very nature, these are going to vary in detection difficulty from "How did this get past the filters?  Everything okay over there, security?" to "gun to my head, I would have sworn that was legitimate".  That's fine.  Because finding out if people will click things isn't the primary thing you want to be testing.  They will.  Put the right bait in front of you and you will click things.  I have clicked things and I will almost certainly click things again.  Some people will click things because they think you're watching and they're messing with you.  People will click things because they think it's a phish and want to see what happens (annoyingly, this is most prevalent in tech departments; the exact places where creds or a detonation are most likely to be damaging).  What you want to test is the chain of events that happen after that.  Do they realize "something's not right"?  Do they tell anyone?  Do they tell the right people?  Do those people respond the way they're supposed to?  If they dropped creds, what could have been done with them?  If they opened a maldoc, could it have detonated?  Then all of this (and anything else you can think of) gets rolled back into your security / support / training programs.
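A minimal sketch of the "strip the links and replace them with your own" step, for the curious. The tracker URL is a made-up placeholder, and real phishing-test platforms handle MIME parsing, attachments, and payload stripping far more carefully than a regex over href attributes:

```python
# Sketch of defanging a captured real phish for reuse as a test: swap every
# link in the HTML body for your own tracking URL. The tracker domain and
# parameters here are invented placeholders.
import re

TRACKER = "https://phishtest.example.internal/t?id={campaign}&u={user}"

def defang_links(html_body, campaign, user):
    """Replace every href in the captured phish with our tracked link."""
    tracked = TRACKER.format(campaign=campaign, user=user)
    return re.sub(r'href="[^"]*"', f'href="{tracked}"', html_body)

sample = '<a href="http://evil.example/login">Verify your account</a>'
print(defang_links(sample, campaign="q3-invoice", user="u123"))
```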

One key here is that clicking the thing, or even entering creds, isn't automatically a failure state.  Sure, it's less than ideal; but if they're immediately on the phone with whoever they're supposed to get on the phone with, that's still a partial pass for them and they probably don't need any additional training.  Just the self-knowledge that they got taken will naturally cause them to up their alertness.  A quick "hey, yeah, you clicked the thing and you entered your creds, but then you noticed something wasn't quite right and did everything we've asked you to do if that happens; so, good job!  Any questions?  How'd [whatever team they're supposed to contact] treat you?" follow up is enough.  It's the people who click the thing, enter the creds, and then don't say shit that you need to have chats with... and there's a decent chance, if it's the first time, the reason they didn't say anything was because they were afraid of getting in trouble; which is when you let them know that "we don't do that here", that they're not going to get in trouble for admitting to a mistake, but that you will rain down bloody hellfire upon them if you catch them trying to cover one up.

Another key is that users aren't the only ones you're testing with this.  You're also testing your technical controls and your service teams' people and processes.  Okay, somebody entered creds or detonated a payload; let's clone that account's permissions and access and hand it off to the Red Team, see what they could do with that.  Is there any way we could limit that?  Did all the alerts that should have triggered actually trigger?  Did automations do what they were supposed to?  Somebody called [wrong place] to report they got phished; did the people at [wrong place] know where [right place] was?  Did they connect them directly to [right place] or did they relay a message?  If it was a relay, did [right place] follow up and contact the person?  How did [right place] eventually handle it?  Was it handled appropriately (e.g. did they isolate the host / segment if they opened a maldoc or roll creds if they entered them)?  How long did it take?  Did the people at [wrong place] or [right place] act like pricks and make the person reporting less likely to do it in the future?  (To be clear, that's bad.  You want people to be comfortable admitting they might have messed up)  Basically, this is the important stuff and users are the RNG / "designated casualties" in a war game... they just don't know it and, you know, you actually shot them in order to simulate them getting shot... but it's cool, you made sure not to hit anything important.

So, this has gotten long, but some final thoughts on phishing tests.  There are lots of debates about using potentially sensitive pretexts (e.g. "Holiday Bonus!", "Click here to keep your job!", etc.).  Going back to my "use real phish" thing, if real phishers are using them against your org, you should be "testing" with them.  BUT, you need to be careful with how you do it; don't just go blasting them out.  Send out a companywide announcement that you are seeing these pretexts, include examples that are close but not exact to the phish you intend to use (realistically, if your org is getting targeted with these sort of phish, you'll have multiple variant samples to choose from, use one as your phish and pull examples from the others).  Wait several hours, send your "holiday bonus" phish, wait 5-10 minutes, send out a "Hey, you know those phish we warned about a couple hours ago?  Yeah, it looks like a batch slipped through our filters, sorry about that.  If you got one, please report it." email, include examples directly from your phish.  The point for these isn't so much to actually test anyone, it's to inform and then reinforce that "yeah, those are a real thing that we are being targeted with".  If you're lucky, the report rate on that is going to be through the roof and nobody will actually fall for it (there's a decent chance you'll have a few people "fall for it", knowing it's you, just to mess with you).  If someone does, well, then you get to figure out a "professional" way of saying "Yo, WTF?!  Did you not see the warnings both above and below that message about that exact thing?  Read our f***kin' emails!"
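The warn → phish → follow-up timing above can be sketched as a simple schedule. The offsets follow the post ("several hours", "5-10 minutes"); actual send tooling is left out entirely:

```python
# Sketch of the announce -> phish -> follow-up timeline for sensitive
# pretexts described above. The specific offsets are illustrative picks
# within the ranges the post suggests.
from datetime import datetime, timedelta

def schedule_sensitive_test(start):
    """Return (when, what) steps for a sensitive-pretext phishing test."""
    return [
        (start, "companywide warning with near-miss examples"),
        (start + timedelta(hours=3),
         "send the real-sample 'holiday bonus' phish"),
        (start + timedelta(hours=3, minutes=7),
         "follow-up: 'a batch slipped through our filters, please report'"),
    ]

for when, what in schedule_sensitive_test(datetime(2023, 12, 1, 9, 0)):
    print(when.strftime("%H:%M"), "-", what)
```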

  • Like 5
  • Thanks 2

23 hours ago, Boccraft said:

In the organisation I work for there is an optional training on the topic, otherwise it is occasionally mentioned in the mandatory annual safety training, which otherwise mainly covers things like fire safety. I would be interested to know if this is the norm or not. 

Nope, we have documents that no one reads for that.


Mandatory trainings and phishing tests at my work. I have no idea if there is additional training for people who fail the phishing tests; I always reported the phishing attempts.

  • Like 2

3 hours ago, Chauke said:

Nope, we have documents that no one reads for that.

Reminds me of what someone said at today's security training: "We don't need guidelines if no one reads them!" 😅


I've given a talk to my organization on social engineering four times at various internal events.
They want me to record it, since it's led to an uptick in reported phishing and suspicious events.
Learning about social engineering and how to protect yourself is far more fun when you see real examples of cyber team members impersonating the CEO or getting past physical security than when it's lumped into annual trainings (especially if you're in an industry like mine with an obnoxious number of annual trainings to get through for compliance reasons).
Plus I include memes and gifs. That's the best way to get people's attention.
We also do phishing tests periodically. 

  • Like 1
