The Tribal CISO

Throughout my career I have been through more “Leadership” or “Managerial” training than I can remember, from the lead-by-example style I learned in the military to the corporate leadership (aka managerial) style that takes a more scientific approach. I have seen many styles come and go, and there is certainly no shortage of articles and trends published on a daily basis. Those of us who have been through the drill enough times know what works and what doesn’t — in the words of Kenny Rogers, when to hold ’em and when to fold ’em.

We tend to focus on the results we have achieved in the past with a given scenario, learning from our mistakes and highlighting successful efforts. In my observations we tend to do the same thing when implementing various frameworks, whether it’s ISO, NIST, COBIT, FAIR, ITIL, CERT-RMM, the Diamond Model or OCTAVE. You name it, there is a framework for it. Some people pluck the goodness from multiple frameworks and create their own; others will kneel at the altar of the chosen framework and swear allegiance to it for all time.

Leadership and management styles can be viewed in much the same manner. There is always an interesting conversation when you ask someone the difference between leadership and management, leading and directing, mentorship and oversight. The most glaring difference, however, is that “leadership” is more of a social mechanism while “management” is more a set of tools for your toolbox.

James Altucher published an article on the 10 things he thinks you should know in order to become a great leader, and one section in particular caught my eye. Specifically, he states:

Below 30 people, an organization is a tribe. 70,000 years ago, if a tribe got bigger than 30 people there’s evidence it would split into two tribes. A tribe is like a family. With a family you learn personally who to trust and who not to trust. You learn to care for their individual problems. You know everything about the people in your tribe. At 30 people, a leader spends time with each person in the tribe and knows how to listen to their issues. From 30-150 people you might not know everyone. But you know OF everyone. You know you can trust Jill because Jack tells you you can trust Jill and you trust Jack. After 150 people you can’t keep track of everyone. It’s impossible. But this is where humans split off from every other species.

We united with each other by telling stories. We told stories of nationalism, religion, sports, money, products, better, great, BEST! If two people believe in the same story they might be thousands of miles apart and total strangers but they still have a sense they can trust each other. A LEADER TELLS A VISIONARY STORY. We are delivering the best service because…. We are helping people in unique ways because…. We have the best designs because…. We treat people better because…. A good story, like any story ever told, starts with a problem, goes through the painful process of solving the problem, and has a solution that is better than anything ever seen before. First you listened to people, then you took care of people, but now you unite people under a vision they believe in and trust and bond with.

How does this relate to the CISO role or anything else for that matter?

In my humble opinion, this topic, and where you fall in it, will decide whether you build and operate a successful cybersecurity program. Over the years I have built and run multiple teams performing all kinds of functions, not just in the technology space but also in the military, in emergency response, heck, even running a kitchen staff when I was in high school, and — success or failure — it always felt “right.”

Here’s why. As Mr. Altucher defined so well, I have a tribal leadership style, and thinking back as I write this, I have set up my cybersecurity programs both past and present in the tribal manner, though I never defined it that way until now. In business terminology, upon walking through the door of a new organization I have always assessed the landscape of cybersecurity products, services, programs and projects, usually reorganizing employees and operations to be collaborative, efficient, and effective. Viewed another way, however, I was also organizing the cybersecurity program into multiple tribes.

These tribes sat together, supported one another, collaborated, gave and received advice and supported each other’s decisions. They received mentorship as well as the vision for the tribe on what mission success should look like. I back up my tribes and they back me up, always seeking out facts and making sure everyone’s covered.

For those of you with military, police or fire backgrounds, you can certainly relate to what I am talking about. When you think about this concept and observe your own current corporate culture, are you tribal? Are the functional teams supporting one another, giving and receiving advice and collaborating freely? Are you backing your tribes up, and are they backing you up?

If not, here are some advisory tidbits I would recommend:

  1. View your leadership style through a social lens. Treat your management style as tools for your toolbox. Do not treat your tribes as tools.
  2. Do you differentiate between programs and projects? Programs have outcomes; projects have outputs. I lead my tribes as a program and want a successful outcome. Therefore, my tribes don’t have milestones or deadlines; they have only mission success or not.
  3. Keep your tribes small and focused. I commonly use the term “high speed and low drag.” This supports organizational resilience. When you’re breached and need to pivot, this is the optimum way; empire building does not mean success.
  4. Do not build your tribes solely around a standard or framework. If you focus solely on industry standards or cybersecurity frameworks you will fail. Build your tribes based on outcomes and whatever mission success means in your organization. Do not try to build a tribe into columns, rows, and cells.
  5. Be willing to change. If you are in your workspace as you read this and, as you survey the landscape around you, it feels like a scene from the movie Office Space, you should reflect on that for a few minutes and think about some ways to change it.
  6. Observe the below simple diagram:
    1. It is not a top down org chart; it is a tribal “system.”
    2. Each tribe would have its own products and services it is responsible for, as well as its mission goals and outcomes.
    3. From an operations standpoint you are leading an ecosystem with an environment that changes every day, hundreds of times a day. Define what “normal” looks like and observe and react when something “abnormal” occurs.

[Diagram: the tribal CISO “system”]

What Can We Learn About Social Engineering From Impersonation?

With organizations losing billions of dollars due to business email compromise scams and thousands of employees having their W-2 information sent to criminals each week, it can be easy to think, “How can people be so dumb and keep falling for these same tricks?”

When it comes to socially engineering an employee, most people think of email phishing — and last week we discussed some ways to defend against those threats — but I think the best way to truly understand these cyber threats is to first remove the technology aspects and look at one of the oldest cons around: impersonation.

I love a good impersonation story. Don a disguise. Create a good backstory. Trick some people into doing something they shouldn’t.

It makes for great drama.

Unsurprisingly, when researching how businesses are being compromised by social engineers, nearly all of my favorite examples involved this tactic. Impersonation stories are important because they highlight how simple, effective techniques can lead to a major compromise at an organization.

For example, Christopher Hadnagy, CEO of Social Engineer, Inc., recounted on our social engineering podcast how two ticketless fans were able to watch the Super Bowl from $25,000 seats by sneaking into the event with a group of first aid workers and then simply acting “super confident.”

Likewise, Chris Blow, a senior advisor at Rook Security, likes to pretend to be an exterminator to test a company’s security. In one instance, he was thwarted by a well-trained receptionist who noticed the con; however, all he had to do was drive around back and find more “helpful” employees — who then let him into sensitive areas where he could access a variety of valuable information.

They Literally Handed Him Their Money

My favorite social engineering story occurred decades before email became popular and everyone learned the term “phishing.” It comes from con man turned FBI consultant Frank Abagnale, who claims to have duped dozens of individuals into handing him their businesses’ money simply by posing as a security guard.

As the story goes, Abagnale noticed how car rental companies would deposit their money in an airport drop box each night, so he bought a security guard outfit and put a sign over the drop box saying “Out of service, place deposits with security guard on duty.”

According to his autobiography, he stood there amazed as people handed him a total of $62,800.

You may hear that story and wonder why all of those people would trust some random guy with a sign. But is that any different than the cybersecurity pros today who are dumbfounded when a person gives their password to an “IT guy” over the phone? Or when an employee hands over their credentials because an email told them to do so?

Simple, effective scams work, have always worked, and when done in person by a skilled social engineer, can be even more effective.

Defending Against Social Engineering

What can we learn from these impersonators?

For one, social engineering is very effective, which is why the FBI and others are warning of a dramatic increase in business email compromise (BEC) scams. From October 2013 through February 2016, this one type of social engineering alone accounted for more than $2.3 billion in losses across 17,600 victims.

Scam artists understand precisely how easy it can be to dupe people, and the same techniques are used in social engineering via phishing and phone. The story above is one of my favorites because Abagnale combines three of those common tactics in one scam: a simple backstory, appearing as though he belongs, and projecting authority.

  1. A Simple Backstory — Whether in person, over the phone or via email, scammers will have all sorts of stories that prey on people’s desire to help. Those handling sensitive information such as W-2 data should always be skeptical about who they are sharing that information with and why, but that is often not enough. Having clear policies and procedures for sharing sensitive information gives employees something to fall back on and can help ensure they do not get duped by their own desire to be helpful.
  2. Appearing as Though They Belong — As the FBI noted in a BEC warning, it’s important to know the habits of customers, coworkers and vendors and to beware of any significant changes. A person may appear as though they belong by impersonating those who have legitimate access. In some BEC attacks, the malicious actors compromised email accounts and waited for weeks or months to learn the communication habits before attempting their scam. Employees should be encouraged to report any suspicious activity and be continuously trained so that the front line of defense is armed to look out for the latest and most relevant social engineering threats.
  3. Projecting Authority — The impersonation of authority figures is a large reason billions of dollars are being lost to these social engineering scams. Just because a call or email appears to come from the CEO or another authority figure, be wary of any request to disclose data or grant access. Authenticating important requests through several channels, such as both email and phone, can help prevent many social engineering attempts (see the sketch after this list for one simple automated check).
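As a technical complement to those habits, some impersonation attempts can be flagged automatically at the mail gateway. Below is a minimal, illustrative Python sketch (my own example, not something from the FBI guidance above) of one common BEC tell: a Reply-To domain that differs from the From domain, so replies quietly go to the scammer. All addresses shown are hypothetical.

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_email: str) -> bool:
    """Flag a common BEC tell: the Reply-To domain differs from the From
    domain, so responses are diverted to the scammer."""
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

sample = (
    "From: CEO <ceo@example.com>\n"
    "Reply-To: ceo@examp1e-corp.net\n"
    "Subject: Urgent wire transfer\n\n"
    "Please process today."
)
print(reply_to_mismatch(sample))  # True: replies go to a look-alike domain
```

A rule like this is no substitute for out-of-band verification, but it cheaply surfaces requests that deserve a phone call.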

People want to be helpful. They tend to trust others. Good social engineers exploit those tendencies. The influx of technology has only expanded the reach of scam artists; the techniques remain the same. If an organization and its employees understand why social engineering works, then it’s much easier to combat some of those common tactics and keep the business safe.

Social Engineering – Security’s Big Problem and How to Fight Back

Pick any recent data breach. It could be a high-profile one or one of the many that never make national headlines. If we were to follow the string of events back to the beginning of that compromise, what would we find?

Chances are, it’s an employee getting duped by a cybercriminal.

In fact, one could make the case that social engineering is the single biggest issue facing organizations when it comes to cybersecurity. No matter how big of a fortress you build, all it takes is one employee to open the gate and let the bad guys walk into the heart of a business.

One of my favorite cartoons sums up the issue facing businesses:

Source: John Klossner

With all of the recent W-2 breaches in the news this year, I’ve been thinking once again about the issue of social engineering. What can businesses do? It seems every article I read only points out the problem and then makes vague references to “awareness.”

In 2015 SurfWatch Labs interviewed a variety of people to try to get to the heart of that question, and I think it’s a good idea to revisit that conversation eight months later. After all, it is a problem that will never go away.

Essentially, everyone agrees that a three-pronged approach is the key to limiting the success of cybercriminals using social engineering tactics:

  1. Use technology and tools to limit the exposure to social engineering
  2. Train employees so those social engineering attempts that do get through are less successful
  3. Realize that even the best trained organizations aren’t perfect, so have tools and a response plan in place to limit the potential damage

Let’s briefly expand on the first two points about prevention.

Limiting Exposure to Social Engineering

Technology is getting better at limiting users’ exposure. Take email as an example. In 2006 about 30 percent of an average Hotmail user’s inbox was spam — a huge problem. By 2012 that number was down to 3 percent. In July 2015, Google released its latest numbers, and less than 0.1 percent of the average Gmail inbox was spam.

The less malicious activity that gets through to an organization, the less potential there is for an employee to make a mistake. There are several ways an organization can pursue this goal, as outlined by the many groups and organizations dedicated to fighting social engineering, such as the Anti-Phishing Working Group.

Some best practices specific to phishing include:

  • Filtering and endpoint technologies – Filtering technologies are great at catching high-volume, low-customization spam. Endpoint solutions can also combat threats like malicious attachments.
  • Blocking images, links, and attachments – Disabling images and links in emails from untrusted senders can help users identify legitimate emails and prevent employees from clicking malicious links. Disabling Microsoft Office macros from Internet-obtained documents can help block a common attack vector that has led to many recent data breaches.
  • Web traffic filtering – Many websites are known to steal user credentials. These phishing sites are often collected into lists by both commercial vendors and free services like PhishTank. Blocking access to these sites can limit the opportunity for users to fall victim to social engineering (a minimal example follows this list).
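To make the web filtering idea concrete, here is a minimal Python sketch. It is purely illustrative: the domains are made up, and in practice the blocklist would be loaded from a commercial feed or a free service such as PhishTank rather than hard-coded.

```python
from urllib.parse import urlparse

# Toy blocklist; in practice these entries would come from a phishing feed.
BLOCKED_DOMAINS = {"examp1e-bank.net", "paypa1-secure.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("http://login.examp1e-bank.net/reset"))  # True
print(is_blocked("https://example.com/"))                  # False
```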

Some other areas that can be useful in preventing social engineering include:

  • Authentication – Malicious actors will often impersonate others outside of email, so it is important to have strong ways to authenticate users.
  • Physical security – Physical security limits the ability for unauthorized individuals to access areas, eavesdrop on conversations, and use baiting (like dropping a malware-loaded USB stick). The organization should have effective physical security controls such as visitor logs, escort requirements, and background checks.

Training Employees and Raising Awareness

Even with security technology in place, employees will still make mistakes. Security company RSA learned this in 2011 when a phishing email targeting four low-level employees was caught by a filter and placed in their junk folders; however, one of the employees, enticed by the lure — “2011 Recruitment plan.xls” — retrieved it from the folder and opened the attachment, leading to a compromise that cost the company $66.3 million.

That is why training and awareness are often touted as the most important and cost-effective steps in combating social engineering. According to the 2016 Verizon Data Breach Investigations Report, 30% of phishing messages were opened and 12% went on to click the malicious attachment. And in 2016 phishing is on the rise, according to SurfWatch Labs data. Additionally, a recent Ponemon Institute study examining six proof-of-concept studies found that phishing training reduced employee click rates by between 26% and 99%.

This led Ponemon to conclude, “Assuming a net improvement of 47.75%, we estimate a cost savings of $1.80 million or $188.40 per employee [for the average organization].”
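As a quick sanity check on those quoted figures (my arithmetic, not Ponemon’s), the total and per-employee savings imply the size of the “average organization” being modeled:

```python
# Ponemon quotes $1.80 million in total savings, or $188.40 per employee.
total_savings = 1_800_000
per_employee = 188.40
print(round(total_savings / per_employee))  # ~9,554 employees
```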

Some of the do’s and don’ts of a good security training program include: [table not reproduced]

Social engineering is one of the biggest cyber threats facing organizations; however, many businesses devote relatively few resources to addressing the problem. Implementing technology and tools to limit exposure to social engineering and training employees may be the most cost-effective way for many organizations to significantly reduce their cyber risk.

Does Your Cyber Risk Strategy Pass the Penny Test?

As cyber incidents proliferate, security experts continue to stress the importance of cyber risk strategy starting at the top of organizations. However, a recent report surveying more than 1,500 non-executive directors, C-level executives, Chief Information Officers, and Chief Information Security Officers found that some organizations still have a big knowledge gap when it comes to cyber threats.

According to The Accountability Gap: Cybersecurity & Building a Culture of Responsibility:

  1. Only 10% of highly vulnerable respondents agree that they are regularly updated about pertinent cybersecurity threats
  2. More than 90% of highly vulnerable board members say they can’t interpret a cybersecurity report
  3. Only 9% of highly vulnerable board members said their systems were regularly updated in response to new cyberthreats

Many of these organizations are concerned about potential cybercrime. All of them are likely doing something to combat cyber risks. But they’re not getting updated on important threats, they cannot understand the updates that do come through, and as a result they do nothing.

That led me to wonder if we’ve all gotten stuck in the same methods of looking at the same things in the same way day after day without ever taking a breath and a step back and asking, “Wait, why am I doing this?”

The Penny Test

There was a fascinating story on the news a while back about people being wrongfully convicted based on faulty eyewitness testimony.

In fact, according to the Innocence Project, “Eyewitness misidentification is the single greatest cause of wrongful convictions nationwide, playing a role in 72% of convictions overturned through DNA testing.”

However, the point wasn’t that eyewitnesses are careless or just plain ignorant; it’s that without the whole picture — the complete context of the situation — it’s natural to make a simple mistake that can cost a person decades of his or her life.

To illustrate, let’s do a variation of the Penny Test using a six-person “lineup” to see if you can identify the “real” penny.

Which penny is correct?

If you’re like most people, you’ll eliminate a few possibilities, narrowing it down to a couple of choices. Then, over time — and along with other factors that may reinforce your decision — you grow more certain that, yes, that penny you’ve chosen is definitely the right one.

But here’s the problem with the story I’ve given you: it’s incomplete. I failed to mention the possibility that the correct version of the penny might not be there at all.

That’s one of the problems with the human mind: it wants to pick something. And it’s one of the many problems that can arise from eyewitness identification.

All of the pennies were wrong.

Cybersecurity Blind Spots

That lack of context can also be a real problem when it comes to managing cyber risk. Without having the whole picture, it’s natural to invest in the wrong areas or to make a mistake that leaves an organization vulnerable to cyber-attack.

This is what many of the recent studies and surveys have been reinforcing. The IT team is wasting its time elbow-deep in low-level data and investigating red flags, never having a chance to think about or act on high-level strategy. Executives don’t even know what aspects of their company are at risk, so they’re fumbling around in the dark and relying on vendors for the answers.

The problem with that? They’re biased.

Just as cops in the world of traditional crime may lead a witness toward a certain perpetrator (“We thought it may have been number three, too.”), a vendor may lead you toward their biases — regardless of the true risk profile and needs of your business.

When you’re assessing cyber risk, remember that one option is always “none of the above.” The answer might be something else entirely.

Understanding Complete Context

Many organizations have these cyber blind spots. For example, most organizations don’t assess the security of third-party partners or their supply chain, yet we’ve seen dozens of data breaches that begin from these very avenues.

If relevant cyber threat information is available, it often doesn’t make its way to those with the ability to actually make changes. And if it does get passed along, those executives may be unable to interpret the technical language of the threats. And if they do know and understand the threats, it may end up that those threats are no longer as relevant; there may be newer, more pressing cyber risks.

That’s why nearly every cybersecurity best practice guide or cyber risk management program begins with the same thing: context. Clear away as many of those blind spots as possible.

Remember the Penny Test. Just because you are doing something doesn’t mean it’s the best use of resources. The real threat might still be out there, and without having complete context around your cyber risks, you may miss it.

Sharing is Caring – Threat Intel for You and Your Business Partners

As kids we’re taught to share our toys. It’s a hard lesson to “get.”

When it comes to cybersecurity and information sharing, many still don’t “get” it: liability concerns, competitive disadvantages, and so on. But even if some of those concerns are legitimate, this lesson really shouldn’t be so hard.

According to the latest Verizon DBIR, while compromises are happening faster, discovering them is taking longer than in previous years. We can combat this challenge through sound threat intelligence and sharing among “friends.” Through intel you can be better prepared in advance of an attack, reducing the number of incidents you need to respond to.

Many are trying to address this sharing problem — hence the creation of Information Sharing and Analysis Centers, aka ISACs. There are a boatload of ’em — 18 listed on Wikipedia’s page on ISACs. Each ISAC is specific to an industry, so in theory relevancy is built into the information that is shared. The intent of these ISACs is sound, and there are many good people working to make them really useful. But they have their limits as well. We all have businesses to run and support, after all.

So how do we take the ISAC concept up a notch, so that the intel being shared is not merely relevant but SPECIFIC to your business? Privatize the ISAC to fit your own business ecosystem. This means pulling in your partners and suppliers. You should already be sharing information with them anyway; just include cyber as part of it.
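What might that shared “cyber” information look like? Here is a minimal, hypothetical example of an indicator record a company could pass to its suppliers; mature programs often use standards such as STIX for this, but even a simple agreed-upon format is a start. Every value below is invented for illustration.

```python
import json
from datetime import datetime, timezone

# A hypothetical indicator record to share with supply-chain partners.
indicator = {
    "shared_by": "example-corp",
    "observed_at": datetime.now(timezone.utc).isoformat(),
    "type": "domain",
    "value": "examp1e-corp-login.net",  # look-alike domain seen in phishing
    "context": "credential phishing targeting accounts-payable staff",
    "suggested_action": "block at mail and web gateways",
}
print(json.dumps(indicator, indent=2))
```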

Whether you are a big, medium or small business, you most likely have partners and suppliers that are an extension of your cyber footprint. They typically have some level of access to your network, applications and data. These intersecting points allow business to run more efficiently, but with them comes risk. A company’s suppliers are often integral to its business — I need X and Y to fulfill Z, and X comes from a supplier. Suppliers that don’t pay enough attention to security can ultimately cause a very direct and painful impact on your business (Target is the obvious supply chain cyber example used often, but there are plenty more where that came from).

As opposed to sharing information with folks you don’t know (and let’s be honest, how much do you want to really expose to a wider audience not within your control?), your own supply chain is, for all intents and purposes, just an extension of your own enterprise. It only makes sense that your security “umbrella” should extend out a bit over them as well.

As such, sharing info, analysis and expertise within your “extended family” can be very valuable to establishing the kind of early warning system that is the promise of cyber information sharing to begin with — and without most of the risks.

Sharing threat intelligence, risk identification and other analysis with your partners helps you help yourself. Cybercriminals work together and share information all the time in Dark Web forums and even sometimes out in the open.

Sharing is caring. And the group of folks that you will get the most value out of sharing cyber threat intelligence with are the companies in your supply chain.

“Actionable” Information vs. Practical Cyber Threat Intelligence

I am a practical guy. I don’t like to waste a lot of time and tend to gravitate to things that work, whether I originally thought up the idea or someone else did. I’m of the “if it works, it works” mantra. Much of that attitude stems from joining the military and being thrust into a culture that demands outside-the-box thinking: assess the problem and work through scenarios, use past experience and lessons learned, use the right tool for the right job and, lastly, be mission oriented.

When it comes to cyber threat intelligence (CTI), the key value can be unlocked by making it practical. What are the answers to the “so what” questions? Why would anyone want to spend budget on this? CISOs and like roles have a lot of headaches. How does this help that headache? How do I make this stuff useful to decision makers? Who are the decision makers? Why would they care?

The problem is that the value of CTI is being misrepresented. What I’ve noticed is an overwhelming drumbeat toward tools — tools that will sprinkle pixie dust over your threats and make things “actionable.” But getting an avalanche of data is not the same as evaluated intelligence — and yet the two get confused far too often.

Information is raw and unfiltered. Intelligence is organized and distilled. Intelligence is analyzed, evaluated and interpreted by experts. Information is pulled together from as many places as physically possible (creating an unnecessary and unrealistic workload for any analyst team to organize, distill, evaluate, etc.), and may be misleading or create lots of false positives. Intelligence is accurate, timely and relevant.

The reality is that “actionable” really just means a new alert/alarm/event that you now have to whack-a-mole. In some of the presentations I’ve given I’ve talked about the “actionable, actionating, actionator.” Sounds ridiculous right? That’s the point. But this is more common than it should be. And because of this teams are getting dragged away from productive efforts and into areas that are less productive.

This should not be surprising, as many of the CTI vendors are tool builders and, no surprise, they push tools to solve the problem. However, here is where I will deviate: my background is that of a CISO, program manager and team builder. I am seeing a big disconnect between the threats present in our industries and the practical application of resources — a combination of people, process and technology — to reduce the likelihood of those threats becoming a reality.

You see there’s a big difference between security tools and programs. Security tools (or feeds) are bolt-on and output-driven while security programs encompass people, process and technology … and they are OUTCOME-driven.

Threat intelligence should be outcome-driven vs. output-driven. In my previous role as a CISO, I wanted and needed to know about threats that were specific to my organization. I needed to know what capability, opportunity and intent those threat actors had, along with a plan to ensure we were well-positioned before an event occurred (and, in case we were not ready, an effective plan for when we moved from event to incident to breach).
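To make that concrete, here is a toy sketch (my own illustration, not a formal model) of turning analyst-scored capability, opportunity and intent into a simple ranking so the most pressing threats surface first. The threat names and scores are invented.

```python
# Each threat is scored 1-5 by analysts on capability, opportunity, intent.
threats = {
    "commodity ransomware crew": {"capability": 3, "opportunity": 4, "intent": 5},
    "hacktivist defacement":     {"capability": 2, "opportunity": 3, "intent": 2},
    "insider data theft":        {"capability": 4, "opportunity": 5, "intent": 2},
}

def score(t: dict) -> int:
    """Naive priority: the product of the three factors."""
    return t["capability"] * t["opportunity"] * t["intent"]

for name, t in sorted(threats.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(t):3d}  {name}")
```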

So as you look at the many “threat intelligence” options out there, ask yourself this: will this intel drive the organization to make the right decisions and take the right actions?

Don’t bite off more than you can chew; start simple by focusing on evaluated intelligence. From there, make your risks learnable by separating random (or un-analyzed) risks from what is more likely, so you can reduce your uncertainty — and then tie those learnable risks to the characteristics of your business.

Dark Web Insights: Misconceptions About the Dark Web

The Dark Web is often misunderstood. For the unfamiliar, it is often viewed either as a mysterious place full of technological gurus communicating via primitive interfaces or as something akin to the Wild West — a no-holds-barred free-for-all of dangerous and illicit activity.

However, neither is the case.

The most popular marketplaces, where everything from stolen identities and credit cards to drugs and weapons is for sale, are more reminiscent of popular e-commerce sites than of the shady, backdoor dealings one might expect from criminals. Buying stolen accounts and intellectual property — as well as exploit kits, hacking-for-hire services, and the infrastructure to distribute malware — is actually quite simple.

This reality runs contrary to much of the media coverage around the Dark Web. Stories such as the 2013 takedown of the infamous Silk Road marketplace tend to focus on the scary aspects of “hidden” websites or scandalous details such as the Silk Road’s murder-for-hire plot — ignoring the fact that most people with an hour of free time and a few Google searches can easily find these sites and purchase illicit goods and services if desired.

In this series of blog posts, SurfWatch Labs hopes to shine a light on various aspects of the Dark Web, starting with what the Dark Web actually is — and what it isn’t.

1. Most Dark Web Markets are Customer Friendly

Those new to the Dark Web are often surprised by the level of customer service and the ease with which fraudulent goods and services can be obtained. However, this makes sense given that there are many competing marketplaces on the Dark Web. Customers and sellers gravitate toward the markets that appear safest and have the best features.

AlphaBay is among the most popular and established Dark Web marketplaces (Nucleus Market, another popular marketplace, recently went offline). These marketplaces try to emulate the features seen on popular e-commerce sites such as Amazon or eBay.

PayPal accounts for sale on AlphaBay

Some of these features include:

  1. Easy Navigation – Items are categorized into high-level categories such as fraud with subcategories like accounts, credit cards, personal information, data dumps and others.
  2. Vendor and Trust Levels – Sellers often have ratings. In the case of AlphaBay there is both a “Vendor Level,” which is based on number of sales and amount sold, and a “Trust Level,” which is based on the level of activity within the community as well as feedback from users.
  3. Feedback and Refunds – Buyers can see feedback from previous customers and often have the option of refunds or replacements — for example, for credit card numbers that no longer work because they were reported stolen.

Although these Dark Web markets tend not to be discoverable through Google and often require special software such as the Tor browser to access, they do want users to find and use them — so it is easy to locate them, search for goods or services and make purchases.

2. They’re Concerned About Security and Trust

Most people know the old adage “there is no honor among thieves,” and these illicit markets work hard to help assuage those fears. This begins at the customer level with ratings and reviews.

Seller ratings on AlphaBay Market appear similar to the ratings on eBay. The system includes independent ratings for the stealth, quality and value of the product; the total number of positive, negative and neutral ratings over set periods; and text reviews from previous customers about their purchases.

These features help to establish trust when buying things like malware and stolen credit cards. Through ratings and feedback, the community can collectively judge whether the items for sale can actually be used for fraud and attacks — or whether they are just a scam.

In fact, these markets are actively trying to combat spammers and other bad actors just like e-commerce sites on the surface web. In March AlphaBay announced that they were rolling out mandatory two-factor authentication. As Motherboard’s headline ironically noted, “Some Dark Web Markets Have Better User Security than Gmail, Instagram.”

“We now enforce mandatory 2FA (two-factor authentication) for all vendors,” read the AlphaBay announcement. “This is part of an increasing effort to stop phishing on the marketplace. We recommend that everyone uses 2FA for more security.”
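For readers unfamiliar with how those one-time codes are generated, here is a minimal sketch of the standard TOTP algorithm (RFC 6238) in Python using only the standard library. The base32 secret below is a made-up example; real secrets are exchanged at enrollment.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code
```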

In addition, many markets try to avoid coming to the attention of law enforcement. Following the November 2015 terrorist attacks in Paris, which killed 130 people, Nucleus Market posted this message on its homepage:

Message posted on Nucleus Market stating it would no longer allow the sale of weapons.

The decision came just a week after the shootings and news reports that the guns used in the attacks may have been acquired from the Dark Web. Likewise, although child pornography is prevalent on the Dark Web, most of the markets do not sell it alongside the drugs, counterfeit goods and other illegal items, because doing so would attract unwanted attention to them and their user base.

Some Dark Web markets combat the influx of law enforcement and researchers by requiring a referral in order to gain access. Others only show items for sale to established users or require authorization from the seller to view product details. This can make it harder for agents posing as “new customers” to monitor activity, and it helps increase the trust factor around those marketplaces and forums.

3. No, the Dark Web is Not That Massive

In the summer of 2015, two researchers set an automated scanning tool loose on the Tor Network in an effort to find vulnerabilities on Dark Web sites. After just three hours the scan was over and they’d uncovered a little more than 7,000 sites.

A more recent effort to index the Dark Web put that number at close to 30,000 sites — a sizeable amount, but still far less than the massive underground world many have described.

As Wired wrote last year, the number of people on the Dark Web is quite small:

The Tor Project claims that only 1.5 percent of overall traffic on its anonymity network is to do with hidden sites, and that 2 million people per day use Tor in total. In short, the number of people visiting the dark web is a fraction of overall Tor users, the majority of whom are likely just using it to protect their regular browsing habits. Not only are dark web visitors a drop in the bucket of Tor users, they are a speck of dust in the galaxy of total Internet users.

4. It’s a Valuable Source of Threat Intelligence

The Dark Web is a valuable place to gather threat intelligence. SurfWatch Labs threat intelligence analysts proved that recently when they uncovered a breach into web hosting provider Invision Power Services.

That’s not to say everyone should jump on the Dark Web and poke around. It is easy to stumble across illegal things such as child pornography, and without the proper precautions companies or individuals may end up infecting their computers or putting themselves on the radar of cybercriminal groups — making themselves a potential target. However, what better way is there to understand the current threat landscape and the motivations of these malicious actors than to see for yourself what they are talking about, what they are selling, and whether your company — or anyone in your supply chain — is being mentioned?

The Dark Web isn’t the cybersecurity cure-all that some companies make it out to be, but it is a significant part of a complete threat intelligence operation. Without visibility into these markets and the active threats they contain, your organization is operating at a disadvantage.

Why Do People Hate Passwords?

The password: love it or loathe it, this concept and practice have been a cornerstone of basic security for a long time. After covering cybercrime for the last few years, I have come to the conclusion that people hate passwords.

Let’s examine that – why do people hate passwords?

“I think people hate passwords because it’s something else to remember – and something else to forget,” said Aaron Bay, Chief Analyst for SurfWatch Labs. “The need to protect ourselves, and our information, has snowballed into this large, terrible thing we have in place now. Hardware and software have been developed to combat it, but there is still the problem of now someone else is in control of your access.”

Bay points to the 2011 compromise of RSA’s SecurID and the recent vulnerability found in the password management program KeePass to further explain the complications of passwords.

“In 2011, the RSA SecurID was compromised, and the thousands of organizations – including the U.S. Government – that relied on their tokens were now at risk. The password manager KeePass recently had a flaw discovered that allowed attackers to steal passwords directly from the database. These are two examples where these beneficial systems have failed. It is safe to say that these systems, and others, will fail again at some point in the future.”

Without programs to help with the process of using strong passwords, the practice can be daunting. Listeners of the SurfWatch Cyber Risk Roundup who are familiar with our “Funny Story of the Week” have heard us talk about bad password practices. While some of the most common passwords are viewed as humorous – “123456” tops the charts every year – there is a real security concern behind this trend.

The Password Reuse Problem

The main problem is one of volume. Websites, work accounts, devices, iPhone or Android apps, and even credit cards all require passwords or PINs. As a result of people reusing passwords, a number of companies have made headlines for cyber incidents despite the fact that they weren’t actually breached.

  • Amazon: “We discovered a list of email addresses and passwords posted online. While the list was not Amazon-related, we know that many customers reuse their passwords on multiple websites. … We recommend that you choose a password that you have never used with any website.”
  • United Airlines: “We recently learned that an unauthorized party attempted to access your MileagePlus account with usernames and passwords obtained from a third-party source. These usernames and passwords were not obtained as a result of a United data breach and United was not the only company where attempts were made.”
  • Uber: “We investigated and found no evidence of a breach. … This is a good opportunity to remind people to use strong and unique usernames and passwords and to avoid reusing the same credentials across multiple sites and services.”
  • Dropbox: “Recent news articles claiming that Dropbox was hacked aren’t true. Your stuff is safe. The usernames and passwords referenced in these articles were stolen from unrelated services, not Dropbox. Attackers then used these stolen credentials to try to log in to sites across the internet, including Dropbox.”

“Password reuse is very common and more often than not leads to additional compromises when people’s passwords are exposed in the latest data breach,” Bay said, adding that each website having slightly different requirements also makes it harder for users to create unique passwords they can remember. “We not only have to remember the different passwords; when we have to change our passwords we have to remember the rules and make sure the new password doesn’t break them.”

I think everyone understands that remembering passwords can be a hassle. Some people attempt to circumvent this step and simply write the password down next to their work terminal, but that completely negates the point of a password, as it is now in view for everyone to see. If you don’t think your co-workers are capable of using your password for malicious purposes – or for a practical joke – don’t be fooled. Several experts and reports have indicated that insider activity is one of the leading threats organizations face in combating cybercrime. According to SailPoint’s 7th Annual Market Pulse Survey, 1 in 5 employees shares their passwords and login information with members of their team.

“Compounding the problem, 56% of respondents admitted to some level of daily password reuse for the corporate applications they access, with as many as 14% of employees using the same password across all applications,” the survey found.
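One practical way to act on the reuse problem is to check whether a candidate password already appears in known breach dumps. The sketch below uses the public Pwned Passwords API from the Have I Been Pwned service (not discussed above, but a free and widely used resource). Its k-anonymity design means only the first five characters of the password’s SHA-1 hash ever leave your machine.

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:      # network access required
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(times_pwned("123456"))  # the perennial chart-topper; expect millions
```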

Moving Beyond Passwords?

What are the alternatives to passwords? Last year, Yahoo created an option that allows users to log into their accounts without a password. Instead, a link is sent via text message to the user’s phone to validate their access.
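The general pattern behind such passwordless logins is a short-lived, single-use token. Here is a minimal sketch of that pattern, where the example.com URL is hypothetical, the token store is an in-memory dictionary, and SMS delivery is assumed to happen elsewhere.

```python
import secrets
import time

_pending: dict[str, tuple[str, float]] = {}  # token -> (user id, expiry time)
TOKEN_TTL_SECONDS = 300                      # links expire after five minutes

def issue_login_link(user_id: str) -> str:
    """Generate a single-use token and return the link to text to the user."""
    token = secrets.token_urlsafe(32)
    _pending[token] = (user_id, time.time() + TOKEN_TTL_SECONDS)
    return f"https://example.com/login?token={token}"

def redeem_login_token(token: str) -> str | None:
    """Validate a token once, then destroy it; returns the user id or None."""
    entry = _pending.pop(token, None)  # pop ensures single use
    if entry is None:
        return None
    user_id, expires_at = entry
    return user_id if time.time() < expires_at else None

link = issue_login_link("alice")
print(redeem_login_token(link.split("token=")[1]))  # "alice"
print(redeem_login_token(link.split("token=")[1]))  # None: already used
```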

There is also the popular topic of biometrics. In a recent example, the U.K. bank Atom launched a biometric authentication tool that uses a customer’s face and voice instead of a password for validation. Customers can still use a password; the biometric method is offered as an option.

Biometrics seem to be the trend in validation, but passwords remain the default authentication option at this time.

“Biometrics is now being regarded as ‘the next big thing’ to use to protect us,” Bay said. “When Apple introduced the fingerprint reader into the iPhone, biometrics were thrust into the public view. Millions of people, basically overnight, now had a fingerprint reader.”

Bay said the fingerprint readers do work and, for the most part, are secure.

“Is it perfect? Not hardly. Is it the best we have? Unsure. Is it better than many other implementations? Yes, without a doubt. However, it still relies on hardware and software being perfect. Unfortunately, history has shown that is not possible.”

Whether you like passwords or not, until a better, proven solution replaces this validation method, it is imperative that your passwords are secure. This message needs to be communicated and driven home to employees – even if they hate passwords.