Facebook’s terrible, horrible, no good, very bad week continues. Though the social network’s “contact import” feature has been around for a very, very long time, you’ve probably forgotten about it. And if you want to keep Facebook from filling in the gaps by collecting data about your friends from you—or worse, records of your call data—it’s easy to shut your devices up.
To get a look at the contacts you’ve already uploaded to Facebook, you’ll want to visit Facebook’s Manage Invites and Imported Contacts page. You might be slightly shocked to see open invites from many, many years ago still active—just a little head-nod to the viral aspects of the social network back when it was still getting off the ground (and more of a collegiate gathering ground than anything else). Feel free to delete these; who needs an invite to Facebook nowadays anyway?
This will remove unwanted data from Facebook and will likely lead to fewer creepy “do you know Johnny” type friend suggestions. The entire article discusses a few different places on Facebook to visit to ensure specific personal data is deleted. If you have not yet done so, I strongly suggest visiting the article and following the simple outlined steps.
Ars Technica reports on something not all that surprising considering the Facebook news stories lately. This time, it appears that for years Facebook has been surreptitiously scraping call and text message data from Android phones:
If you granted permission to read contacts during Facebook’s installation on Android a few versions ago—specifically before Android 4.1 (Jelly Bean)—that permission also granted Facebook access to call and message logs by default. The permission structure was changed in the Android API in version 16. But Android applications could bypass this change if they were written to earlier versions of the API, so the Facebook app could continue to gain access to call and SMS data by specifying an earlier Android SDK version. Google deprecated version 4.0 of the Android API in October 2017—the point at which the latest call metadata in Facebook users’ data was found. Apple iOS has never allowed silent access to call data.
Facebook provides a way for users to purge collected contact data from their accounts, but it’s not clear if this deletes just contacts or if it also purges call and SMS metadata. After purging my contact data, my contacts and calls were still in the archive I downloaded the next day—likely because the archive was not regenerated for my new request.
As always, if you’re really concerned about privacy, you should not share address book and call-log data with any mobile application. And you may want to examine the rest of what can be found in the downloadable Facebook archive, as it includes all the advertisers that Facebook has shared your contact information with, among other things.
Utterly shameful yet entirely unsurprising for one of the most unscrupulous companies on the internet.
Everyone should know the following truism by now: if you are receiving a web-based service for free, you are not the customer but the product. Your data is being monetized, and likely collected in ways you are unaware of, therefore you should be very careful with what data you provide to the platform.
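To make the quoted mechanism concrete: the sketch below is a hypothetical manifest fragment (not Facebook’s actual manifest) showing how an app could keep the pre–Jelly Bean behavior. Before API level 16, call-log and SMS access were bundled into the contacts permission; by declaring a `targetSdkVersion` below 16, an app that requested `READ_CONTACTS` continued to receive that broader grant even on newer devices.

```xml
<!-- Hypothetical AndroidManifest.xml fragment; package name is invented. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.contactimporter">

    <!-- Targeting API 15 (pre-Jelly Bean) means the permission model of
         that era applies: READ_CONTACTS implicitly carries call-log and
         SMS-log access, with no separate READ_CALL_LOG/READ_SMS prompt. -->
    <uses-sdk android:minSdkVersion="9"
              android:targetSdkVersion="15" />

    <uses-permission android:name="android.permission.READ_CONTACTS" />

</manifest>
```

Android 6.0’s runtime-permission model and later Play Store minimum-target-SDK policies eventually closed this loophole, which lines up with the October 2017 cutoff the article mentions.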
Yahoo Finance discusses why Facebook and Cambridge Analytica-like scandals paint corporate security officers in a bad light:
While the average tenure of a CISO across all industries in the U.S. is 4.5 years, according to Forrester, that stint may shorten when it comes to Silicon Valley, simply because of the nature of the work. Running cybersecurity at a high-profile business such as Facebook, which has access to 2.1 billion-plus users’ data and is constantly pushing out new features, arguably has more potential pitfalls than doing so in many other places.
“Tech companies are the tip of the spear when it comes to security,” contends Jeff Pollard, a principal analyst at Forrester. “I think being a CISO at a tech company is definitely different than being a CISO in a different industry, primarily because you’re really dealing with talented people doing bleeding-edge work.”
While running security certainly makes the CISO a potential scapegoat when push comes to shove, on a day-to-day basis, there can also be a tug-of-war between what the CISO thinks is best for the company and what other executives want. For instance, enacting stricter security measures may contradict other executives’ plans for rapid user and revenue growth — a prerequisite for many businesses to succeed, particularly in an über-competitive, fast-moving industry such as tech.
Specific to the last point, security should never prevent mission-critical operations from executing. It should be a business enabler, ensuring corporate compliance with applicable industry regulations and laws, but never pitting itself against operations. Businesses and government organizations have missions to achieve, and need to be allowed to reach those goals, but in a safe and secure manner.
The other thing about the CISO position is where it falls within the corporate hierarchy. Is the CISO subordinate to the CIO, where the role is often placed, or does the CISO report directly to the CEO? I always advocate for the CISO and CIO being peers. They should be working in concert with each other to ensure corporate operations remain functional, in the most secure manner possible.
All too often when the CISO is subordinate to the CIO, security gets the short end of the funding stick. CIOs are more interested in operations and in adding shiny new gadgets that executives and general employees can appreciate than in security measures, which are too often perceived as impediments to getting work done.
The CISO has a tough job, and as long as they have C-level and board support they should be successful. Without that support, failure is almost guaranteed, especially should a breach or other security incident occur.
The Verge is reporting Twitter’s CISO, Michael Coates, is leaving the company to create his own security startup:
Twitter’s chief information security officer is leaving the company, sources familiar with the matter have told The Verge. Michael Coates, who joined the company in January 2015, is quitting to start his own company, sources said. Coates announced the move internally about three weeks ago, sources said, but had not announced the move externally.
Twitter declined to comment. Coates confirmed the move Wednesday afternoon.
News of Coates’ departure comes on the same day that Michael Zalewski, director of information security engineering at Google, announced his departure from that company after 11 years. (Zalewski was a high-ranking security executive at Google but not its chief security officer; that role belongs to Gerhard Eschelbeck, vice president of security engineering.) And it comes two days after reports that Alex Stamos, Facebook’s chief security officer, plans to leave the company in August. The departures come at a time when tech companies are under mounting pressure to prevent their platforms from being misused by foreign governments and other bad actors ahead of the 2018 midterm elections.
There are a lot of high-level security executives leaving larger Silicon Valley companies as of late. Of the three – Google, Facebook, and Twitter – I think the last has the most interesting-sounding set of challenges.
If you are a security professional interested in a new opportunity, the Twitter gig would definitely be worth looking into.
WIRED sat down with Facebook CEO Mark Zuckerberg for a Q&A about the recent Cambridge Analytica scandal and other problems related to both the company and the huge amount of personal data it collects on people:
Nicholas Thompson: You learned about the Cambridge Analytica breach in late 2015, and you got them to sign a legal document saying the Facebook data they had misappropriated had been deleted. But in the two years since, there were all kinds of stories in the press that could have made one doubt and mistrust them. Why didn’t you dig deeper to see if they had misused Facebook data?
Mark Zuckerberg: So in 2015, when we heard from journalists at The Guardian that Aleksandr Kogan seemed to have shared data with Cambridge Analytica and a few other parties, the immediate actions that we took were to ban Kogan’s app and to demand a legal certification from Kogan and all the other folks who he shared it with. We got those certifications, and Cambridge Analytica had actually told us that they actually hadn’t received raw Facebook data at all. It was some kind of derivative data, but they had deleted it and weren’t [making] any use of it.
In retrospect, though, I think that what you’re pointing out here is one of the biggest mistakes that we made. And that’s why the first action that we now need to go take is to not just rely on certifications that we’ve gotten from developers, but [we] actually need to go and do a full investigation of every single app that was operating before we had the more restrictive platform policies—that had access to a lot of data—and for any app that has any suspicious activity, we’re going to go in and do a full forensic audit. And any developer who won’t sign up for that we’re going to kick off the platform. So, yes, I think the short answer to this is that’s the step that I think we should have done for Cambridge Analytica, and we’re now going to go do it for every developer who is on the platform who had access to a large amount of data before we locked things down in 2014.
Based on my experience running web sites, I suspect Zuckerberg and Facebook had no idea data was being siphoned. They likely implemented some rate control mechanisms, but had – have – zero situational awareness of how that data is being downloaded and by what companies. They merely provide access and that is where things end.
Even if there were some rate controls put into place, just like with traditional network breaches, if the actor’s data exfiltration technique was to slowly trickle the data out, that would be difficult to detect unless the analysts were paying really close attention. I am not saying this is what happened with Cambridge Analytica, but it is a plausible scenario for some form of Facebook corporate deniability.
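To illustrate why a trickle evades a threshold-based rate control: a sliding-window counter only fires when a client exceeds some per-window quota, so an actor who stays just under it can drain data indefinitely. This is a minimal sketch with invented names and thresholds, not a claim about Facebook’s actual systems.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 3600            # hypothetical look-back window (1 hour)
MAX_RECORDS_PER_WINDOW = 10_000  # hypothetical per-client quota

class SlidingWindowRateLimiter:
    """Naive per-client rate control: count records fetched in the last
    hour and refuse any request that would push the count over quota."""

    def __init__(self, window=WINDOW_SECONDS, limit=MAX_RECORDS_PER_WINDOW):
        self.window = window
        self.limit = limit
        self.events = defaultdict(deque)  # client_id -> fetch timestamps

    def allow(self, client_id, now, n_records=1):
        q = self.events[client_id]
        # Expire timestamps that have fallen out of the window.
        while q and q[0] <= now - self.window:
            q.popleft()
        if len(q) + n_records > self.limit:
            return False  # burst detected, request refused
        q.extend([now] * n_records)
        return True
```

A client that bursts past the quota is blocked immediately, but one fetching a modest batch every hour never trips the limiter even though, over months, it exfiltrates far more data in total. Catching the second pattern requires longer-horizon analytics, not a per-window counter.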
If that is the case, then it is just terrible platform design. It boils down to too much of a release-fast mentality, without properly thinking through the implications of deploying features and capabilities. Unintended consequences are hard to fully understand in advance, but still, Facebook has an extremely talented workforce, and I find it hard to believe that, had they slowed down and thoroughly considered their approach, they could not have envisioned this type of scenario.
No matter how the data left Facebook, the company is complicit. It is their platform, and they need to be more cognizant of how third-party access is being used, and must eradicate actors using it maliciously.
Bloomberg reports on Deloitte hiring EUROPOL Executive Director Rob Wainwright to run their cyber security business:
The 50-year-old MI5 veteran will join the Amsterdam-based unit in June, according to Deloitte, which shared an advance copy of its announcement. Deloitte is planning to add 500 people to its European cyber practice to meet growing demand from corporate clients anxious to prevent hacks.
“I spent a lot of the last few years encouraging private-sector leaders to take cybersecurity more seriously, to invest more,” Wainwright said in an interview at Europol’s headquarters in The Hague on Tuesday. “So now I will go directly in there and try to help them do it myself.”
Wainwright has spent 28 years working for the U.K. government, including more than a decade at the MI5 domestic intelligence service, where he specialized in counter-terrorism and organized crime. After stints as head of the U.K. liaison bureau for Europol and running the international department of what is now called the National Crime Agency, he returned to Europol as director in 2009.
During his time at Europol, which acts as an intermediary for 1,000 global law enforcement bodies and coordinates major investigations involving terrorism and money laundering, Wainwright helped oversee a number of high-profile stings. He played a key role in last year’s takedown of AlphaBay and Hansa, dark-web markets that sold everything from drugs to hacking tools. AlphaBay was more than 10 times the size of Silk Road, which the U.S. closed in 2013.
Sounds like a major win for Deloitte and a huge hire. It will be interesting to see if Wainwright is capable of developing additional business, and strengthening existing projects, based on his expertise and experience.
CNBC reports on a positive mindset change in Japan on entrepreneurship and start-ups:
Moreover, young Japanese workers have grown up in a world where innovation is driven by the likes of Airbnb, Uber and Facebook, according to Riney. Unlike their parents’ generation, they “never saw a world where massive wealth and innovation-drivers were Sony or Nintendo or some of those more traditional folks.”
Government support has been crucial in bolstering the start-up scene, according to Riney.
The quality of entrepreneurs is also increasing as many left their jobs in consulting or banking sectors to either start their own company or join the management teams of existing start-ups, according to another investor.
“Before, I couldn’t really meet founders with certain prestigious backgrounds,” Hogil Doh, investment manager at Rakuten Ventures, told CNBC. “Now, almost 80 percent of the founders have … worked for McKinsey or Boston Consulting Group or Goldman Sachs.”
The key here is how Japanese society is evolving to no longer view working at a startup as a failure or some kind of plan B. It used to be that if you were unable to get a job at a major Japanese company, a startup was essentially the only route for you to go. Nowadays that is no longer the case, and young folks are increasingly being attracted to startups.
Personally, I think the startup culture attracts Japanese millennials more than being lost in an ocean of corporate drones dressed in their freshman black suits. Startups generally value capability over seniority, Japan’s time-honored norm. They are viewed as potentially better opportunities for growth, even if the work is likely more difficult and riskier than at established companies.
Finding the right startup is always the tough part. On the one hand it is important to locate a company that matches your skills, while on the other hand you want to join a startup that has major growth potential and long-term stability. It is a difficult yet exciting proposition for many young folks, who are increasingly steering away from marriage and family life.
Ultimately, as a Japanese resident, I am very glad to see the startup scene finally taking off. Like with so many other things, Japan is about 15-ish years behind the rest of the world. But once that momentum is built, Japan will be hard to stop, and will become a force to be reckoned with.
TechRepublic is reporting on a Securities and Exchange Commission update to its 2011 cyber security statement, stating US publicly traded companies will be required to disclose in a timely manner when they have been breached or when material cyber security risks exist:
First, and most importantly, is that the SEC is essentially extending its interpretation of older disclosure rules to cover cybersecurity. If you are familiar at all with SEC disclosure guidelines under the Securities Act of 1933 and the Securities Exchange Act of 1934, these new guidelines won’t appear very different—the SEC even wants disclosures filed on the same forms.
As the original 2011 statement said, “although no existing disclosure requirement explicitly refers to cybersecurity risks and cyber incidents, companies nonetheless may be obligated to disclose such risks and incidents.”
What this new interpretive statement does is reinforce and expand the 2011 original, along with adding an important section designed to crack down on insiders trading stock based on undisclosed knowledge of a cyber attack—something important to consider in the wake of stock dumping accusations surrounding the Equifax breach (of which executives were later cleared in an internal investigation).
What the SEC has to say on that particular front is clear: “directors, officers, and other corporate insiders must not trade a public company’s securities while in possession of material nonpublic information, which may include knowledge regarding a significant cybersecurity incident experienced by the company.”
In other words, disclose incidents immediately to prevent even the appearance of impropriety.
It should be obvious to anyone that disclosure should be mandatory. However, most companies will act in the best interest of the officers running the company, and therefore will often attempt to hide breaches from the public. This is harmful in so many ways that it is almost unbelievable that in 2018 there are no actual legally binding requirements.
CNN has an interesting short report on Darktrace, a UK cyber security company founded by ex-MI6 spies and mathematicians:
Instead of just building firewalls, the Darktrace Enterprise Immune System is designed to understand what the company’s normal network looks like and identify any abnormalities.
Sloan says the system behaves the same way as the body when it has the flu: “This technology is like a fever that alerts us when we have a virus and then we need to take action to treat it.”
An example of a small abnormality that the technology would pick up is an employee logging onto the server at 10pm without ever having done so before. It would be immediately flagged as unusual.
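The quoted example reduces to baseline-then-flag logic: learn what “normal” looks like per user, then alert on anything never seen before. This toy sketch is my own illustration of that idea, not Darktrace’s algorithm, which models far richer behavior with machine learning; all names here are invented.

```python
from collections import defaultdict

class LoginBaseline:
    """Toy 'immune system'-style detector: record each user's observed
    login hours during a learning period, then flag any login at an
    hour that user has never been seen at before."""

    def __init__(self):
        self.seen_hours = defaultdict(set)  # user -> {hours observed}

    def observe(self, user, hour):
        """Add a login hour (0-23) to the user's normal baseline."""
        self.seen_hours[user].add(hour)

    def is_anomalous(self, user, hour):
        """True if this user has never logged in at this hour."""
        return hour not in self.seen_hours[user]
```

A real product would weight anomalies by rarity and context rather than using a hard never-seen test, but the core shape (per-entity baseline, deviation alert) is the same.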
I have seen demonstrations of Darktrace technology, and even worked closely with the company, and believe they have a valuable product. They are a unique player in the market and one to consider.
One word of caution: although Darktrace uses AI, they are not the only player in the industry to do so. Just about every company has some form of AI built into its products these days. So take the whole “we use AI” with a grain of salt, since it is no longer a niche idea.
Disclaimer: I work for McAfee, potentially a Darktrace competitor.
Reuters on how well Darkmatter, a United Arab Emirates company, is progressing in the highly competitive cyber security industry:
Darkmatter was founded three years ago in Abu Dhabi, the UAE capital, by CEO Faisal al-Bannai, an Emirati entrepreneur known for setting up regional mobile phone retailer Axiom Telecom.
The majority of its work, 80 percent, is with the UAE government and related entities, which has included advising federal cyber security agency National Electronic Security Authority (NESA).
That has helped the company, with ambitions to globally compete in the cyber sphere with IBM and Lockheed Martin, to double its revenue each year.
“Today, we’re talking about hundreds of millions of dollars,” Bannai told Reuters at Darkmatter’s Abu Dhabi headquarters.
A UAE government representative was not available to comment on the claims.
Darkmatter has gone on a recruiting spree since it started in late 2014, and has more than tripled its workforce to 650. It has hired executives who have worked at major international companies such as Intel Corporation and BlackBerry, but also some with backgrounds in Western military and intelligence agencies including the U.S. National Security Agency (NSA).
A desire to compete against the likes of IBM and Lockheed Martin is quite an ambitious goal. Cyber security companies based in the UAE are almost unheard of. But what makes this industry so exciting is that companies can begin almost anywhere and grow to become global powerhouses.
The UK’s cybersecurity agency has issued a warning to government departments on the potential risks of using Russian antivirus or security software because of fears the Kremlin could use it to conduct espionage.
The advice from the National Cyber Security Centre comes as Russian cybersecurity firm Kaspersky Lab is facing accusations that its software helped with the theft of NSA hacking tools on behalf of the Russian government.
Kaspersky Lab has denied any wrongdoing and CEO Eugene Kaspersky has said he’d remove his company from Moscow if the Kremlin asked it to carry out spying.
The National Cyber Security Centre (NCSC) has warned that Russian cyberattacks are a threat to the UK and that the Russian government could potentially compromise Russian software deployed within organisations for its own ends.
In the case of Skyhigh, McAfee – whose legacy business is in endpoint security – is specifically acquiring the company for that cloud expertise.
“Skyhigh Networks had the foresight five years ago to realize that cybersecurity for cloud environments could not be an impediment to, or afterthought of, cloud adoption,” Chris Young, CEO of McAfee, said in a statement.
“They pioneered an entirely new product category called cloud access security broker that analysts describe as one of the fastest growing areas of information security investments of the last five years – where Skyhigh continues to innovate and lead. Skyhigh’s leadership in cloud security, combined with McAfee’s security portfolio strength, will set the company apart in helping organizations operate freely and securely to reach their full potential.”
“McAfee will provide global scale to further accelerate Skyhigh’s growth, with the combined company providing leading technologies and solutions across cloud and endpoint security – categories Skyhigh and McAfee respectively helped create, and the two architectural control points for enterprise security.”
It ought to be interesting to see how McAfee integrates Skyhigh solutions into their existing product portfolio.
As Tesla’s popularity and usage continue to rise, it will become a much more attractive target for malicious actors. Especially since Tesla makes extensive use of the internet for car-to-cloud connectivity, bad guys will try to find a vulnerability to exploit:
An often-asserted downside of internet-connected vehicles is that they’re subject to various forms of hacking, including theft. On Wednesday, a Norwegian security company called Promon claimed to have found something like the Holy Grail of vehicle hacking—by compromising a Tesla owner’s Android phone, they could take control of Tesla’s mobile app and steal the car.
The hack relies on tricking a Tesla owner into downloading a malicious app, for instance through a spoofed public Wi-Fi hotspot that would direct them to a deceptive Google Play download. That app could then escalate permissions on the owner’s phone and corrupt the Tesla app. Attackers could then, according to Promon, communicate with the Tesla server to issue remote commands including locating the victim’s car, opening its doors, and enabling keyless driving.
Toronto-based Equibit Development Corporation said in a press release today that McAfee has been hired as the company’s chief security officer. In a somewhat unusual arrangement, however, McAfee will be reporting to the board and not the CEO.
“We’re honored and thrilled to be working with John McAfee,” said Equibit CEO Chris Horlacher in a statement. “With his input and ongoing guidance, EDC will continue to set the security standard for blockchain services. We share his unwavering commitment to IT security and, with his help, will continue to push the boundaries of what’s possible in this industry.”
Equibit is a security service for safely issuing shares in companies and protecting trades from hacks, using decentralized blockchain technology. The platform will also handle other shareholder services, like voting and registering new stock owners.
Blockchain is a new security frontier, and is slowly starting to see adoption in the financial sector.
Cybersecurity requires agile improvement and dedicated resources. Unfortunately, instead of businesses taking the recommended holistic approach, cybersecurity is often critically overlooked. An infographic based on a study from security solutions provider Resilient Systems and market research firm Ponemon breaks down key areas of continued failure for businesses.
66 percent of the security and IT professionals surveyed said their organizations are unprepared to recover from a cyber attack, and 75 percent lack a formal incident response plan, which has not changed since last year.