Bloomberg discusses a recent spate of cyber attacks against specific critical infrastructure targets that effectively shut down a pipeline data system:

A cyber attack that hobbled the electronic communication system used by a major U.S. pipeline network has been overcome.

Energy Transfer Partners LP was confident that, after 6 p.m. New York time on Monday, files could safely be exchanged through the EDI platform provided by third-party Energy Services Group LLC, the pipeline company said in a notice. Earlier in the day, it reported a shutdown of the system because of an attack, while saying there was no effect on the flow of natural gas.

The EDI system conducts business through a computer-to-computer exchange of documents with customers. Though it’s not clear who was responsible for the attack, it comes after U.S. officials warned in March that Russian hackers are conducting a broad assault on the nation’s electric grid and other targets. Last month, Atlanta’s municipal government was hobbled for several days by a ransomware attack.

Energy Transfer, run by billionaire Kelcy Warren, isn’t the only pipeline company using EDI. Other operators with similar systems include Kinder Morgan Inc. and Tallgrass Energy Partners LP, according to their websites. Representatives for Kinder and Tallgrass said the companies’ systems weren’t affected.

It is important to note the distinction here: a communications network was attacked, not the gas pipeline's operational network itself. Although the report is light on details, it appears the attackers had no way to disrupt the pipeline itself, only to damage the communications infrastructure.

Expect more similar attacks in the future. Causing outages on communications networks can lead to operational issues. Operators will often bring down the operational networks themselves to ensure personnel safety or to avoid physical damage when adequate monitoring capabilities are lost. There will likely be no direct damage.

In these situations the operational capabilities may end up as collateral damage, not the primary target.

Japan Today reports on a staggering number of personal information leaks as a result of cyber attacks targeting Japanese companies:

There were 3.08 million cases of personal information definitely or probably being leaked through cyberattacks on companies or other entities in Japan in 2017, a Kyodo News tally shows.

These figures are based on data security breaches at 82 entities last year — 76 companies, four administrative entities and two universities, according to the tally of confirmed or suspected data breaches.

The corresponding number of cases totaled 2.07 million in 2015, before surging to 12.6 million in 2016 due to a massive data leak at travel agency JTB Corp.

However, the amount of damages stemming from stolen credit card information hit an all-time high last year, as credit card information was involved in 530,000 cases, or roughly one-sixth of the total.

The total amount of damages roughly doubled from a year earlier to 17.6 billion yen ($166 million) in 2017, according to the Japan Consumer Credit Association.

Yet these figures probably understate the extent of the problem, according to some experts.

There are likely a host of companies unwilling to report data breaches for fear of legal liability or public embarrassment. Take these numbers with a huge spoonful of salt, because the true figure is almost guaranteed to be much larger.

Japan has been making strong strides over the years to increase its cyber security capability. However, there is a lack of emphasis on computer science in grade, middle, and high schools. A concerted, strategic focus on educating young people on cyber security, an extremely important topic, is essentially non-existent. Until the Japanese educational system catches up with the societal shift toward more data-driven enterprises, Japan will unfortunately remain a cyber security laggard.

Dark Reading enumerates the escalating potential for destructive cyber attacks and false flag operations to take place as tensions rise on the global geopolitical stage:

Geopolitical tensions typically map with an uptick in nation-state cyberattacks, and security experts are gearing up for more aggressive and damaging attacks to ensue against the US and its allies in the near-term, including crafted false flag operations that follow the strategy of the recent Olympic Destroyer attack on the 2018 Winter Olympics network.

As US political discord escalates with Russia, Iran, North Korea, and even China, there will be expected cyberattack responses, but those attacks may not all entail the traditional, stealthy cyber espionage. Experts say the Trump administration’s recent sanctions and deportation of Russian diplomats residing in the US will likely precipitate more aggressive responses in the form of Russian hacking operations. And some of those could be crafted to appear as the handiwork of other nation-state actors.

False flag operations are far easier to pull off in cyberspace than with traditional kinetic attacks. It still takes a huge amount of sophistication to properly execute this type of operation and ensure the fingers point at the framed nation-state. But it can be done, and there are strong players with this capability.

Consider how well Russia, China, North Korea, and Iran approach cyber. Western countries like the US, UK, Australia, the Netherlands, and other allies are extremely capable. Even a smaller yet highly advanced country like Israel could pull off a false flag operation. It is well within all these nations' capacities to successfully misdirect cyber attack attribution.

As a quick aside, I suspect Russia was behind WannaCry, even though the US, UK, and other governments have unequivocally attributed it to North Korea. This specific attack does not pass the smell test; it was far too sloppy for a country like North Korea, whose prime expertise is ransomware. Russia had motivation to run WannaCry as a false flag, and I discussed this with Japanese media at length early last year after the outbreak occurred.

There are many political reasons both to publicly shame Russia and to hide it as the culprit. The former would fit with exposing Russia for all its malicious global cyber activity, while the latter is exactly the modus operandi of the Trump Administration. Furthermore, if Russia did false flag WannaCry, there is a strong possibility the US intelligence community and its partners would rather keep that knowledge hidden from the public. This would allow Russia to conduct further similar operations while the various intelligence agencies collect additional data on its tactics and strategy.

While I obviously could be wrong, I still feel as if something does not sit right.

Security experts worry that Russia will continue to ratchet up more aggressive cyberattacks against the US – likely posing as other nations and attack groups for plausible deniability – especially given the success of recent destructive attack campaigns like NotPetya. Not to mention the successful chaos caused by Russia’s election-meddling operation during the 2016 US presidential election.

That doesn’t mean Russia or any other nation-state could or would cause a massive power grid outage in the US, however. Instead, US financial services and transportation networks could be next in line for disruption via nation-state actors, experts say.

Russia has definitely demonstrated sophistication far beyond what the US had expected. Its ability to penetrate so far and wide is a testament to its strong focus on leveraging cyber for geopolitical activity. It is a fundamental shift in Russian intelligence and military strategy, but one generally in line with what the country has done throughout history.

The likelihood Russia actually attacks US critical infrastructure is extremely low, with the exception of potential isolated incidents against smaller players in the industry. As the above quote rightly states, Russia will likely focus on financial services more than any other area. The US needs to be prepared, and I am concerned the maturity of these operators is not at the level it needs to be to properly withstand a sophisticated nation state attack.

Wired discusses the recent Atlanta ransomware attack and how actors leveraging SamSam are selective about their targets, often choosing organizations it believes will end up paying the ransom:

Attackers deploying SamSam are also known to choose their targets carefully—often institutions like local governments, hospitals and health records firms, universities, and industrial control services that may prefer to pay the ransom than deal with the infections themselves and risk extended downtime. They set the ransoms—$50,000 in the case of Atlanta—at price points that are both potentially manageable for victim organizations and worthwhile for attackers.

And unlike some ransomware infections that take a passive, scattershot approach, SamSam assaults can involve active oversight. Attackers adapt to a victim’s response and attempt to endure through remediation efforts. That has been the case in Atlanta, where attackers proactively took down their payment portal after local media publicly exposed the address, resulting in a flood of inquiries, with law enforcement like the FBI close behind.

From an attacker's point of view, it is just smart business to set the ransom price at a point within reach for the victim. The actors are banking on victims believing it is far more expedient and less expensive to pay the ransom than to endure a lengthy outage.

Although it appears easier to pay a ransom to rapidly resume operations, the overall economics of a ransomware attack are not that simple. Even if a victim pays the ransom, they will still need to rebuild essentially the entire network from the ground up to completely eradicate any trace of the attackers. Merely paying a ransom does not guarantee the actors did not leave a backdoor somewhere within the network.

Performing a cost-benefit analysis is important in these situations: weighing lost revenue, lost productivity, and the cost of paying the ransom against the cost of remediating the infection. This is no easy task, with no black-and-white answer. The chosen route ultimately depends on the business and the types of daily operations it undertakes. Ransomware attacks are not one size fits all.
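To make that trade-off concrete, here is a minimal sketch in Python. Every figure below (outage durations, daily losses, ransom, rebuild cost) is a hypothetical assumption for illustration only; real incidents vary enormously.

```python
def incident_cost(days_down, daily_revenue_loss, daily_productivity_loss, fixed_costs):
    """Total cost of an outage: time-based losses plus one-off costs."""
    return days_down * (daily_revenue_loss + daily_productivity_loss) + fixed_costs

# Hypothetical figures, in dollars. Note that paying still implies a rebuild,
# since the attackers may have left a backdoor behind.
ransom = 50_000
rebuild = 500_000

pay_route = incident_cost(days_down=3, daily_revenue_loss=100_000,
                          daily_productivity_loss=40_000,
                          fixed_costs=ransom + rebuild)
rebuild_route = incident_cost(days_down=14, daily_revenue_loss=100_000,
                              daily_productivity_loss=40_000,
                              fixed_costs=rebuild)

print(pay_route, rebuild_route)
```

With these made-up numbers, paying looks cheaper; shrink the outage gap or raise the rebuild-after-payment cost and the answer flips, which is exactly why there is no one-size-fits-all choice.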

In the specific case of Atlanta, it sounds like mission critical data was encrypted in the ransomware attack. That the city cannot recover this data through local or cloud-based backups demonstrates a situation faced all too often: lack of proper foresight and planning. Had the city safely stored mission critical data off site in addition to its local storage, then forgoing payment and merely rebuilding would be an easy choice. But it seems the situation is much more complicated.

The City also suffered a cyberattack in April 2017, which exploited the EternalBlue Windows network file sharing vulnerability to infect the system with the backdoor known as DoublePulsar—used for loading malware onto a network. EternalBlue and DoublePulsar infiltrate systems using the same types of publicly accessible exposures that SamSam looks for, an indication, Williams says, that Atlanta didn’t have its government networks locked down.

“The DoublePulsar results definitely point to poor cybersecurity hygiene on the part of the City and suggest this is an ongoing problem, not a one time thing.”

Though Atlanta won’t comment on the details of the current ransomware attack, a City Auditor’s Office report from January 2018 shows that the City recently failed a security compliance assessment.

This is the issue: Atlanta lacks the security professionals necessary to keep its IT assets safe from modern attacks. This is a good lesson for other similar city governments. Get your act together and ensure security is a priority, baked into IT operations; otherwise, expect successful attacks to continue to hinder operations.

The Daily Beast dissects a recent leak of a classified National Security Agency document outlining how Russian intelligence interfered with the 2016 Presidential election through its highly comprehensive information warfare campaign:

The dumped intelligence report offered some of the best confirmation of Russian meddling in the U.S. election, providing more evidence to tamp down the claims of President Trump and his legions that it was China or a guy in a basement that hacked the Democratic National Committee and many other current and former American officials.

The techniques targeting election officials—spam that redirects recipients to false email login pages yielding passwords to Russian hackers—appear eerily familiar to those used by the GRU against many other U.S. targets in 2015 and 2016.

To the disappointment of Trump’s biggest haters, the NSA leak provides no evidence that Russia changed any votes. And that makes sense, as Russian altering of the tally in favor of their preferred candidate Donald Trump would be sufficient justification for war—one Russia would lose against the U.S.

The Kremlin sought instead to create the perception among Americans that the election may not be authentic in order to push their secondary election effort: Undermine the mandate of Hillary Clinton to govern, should she win.

The idea that Russia hacked actual electronic voting machines is a non-story. That is not how Russian intelligence interfered with the election. Russia did not use the traditional concept of computer hacking to effectively undermine the Clinton campaign. Instead, their comprehensive strategy was old fashioned information warfare, something Russia is extremely capable at executing.

Through the skilled use of video manipulation, meme creation, small cells targeting specific conversations on various social networks, and a wide array of automated bots, Russia effectively mounted one of the most dynamic and well-executed information warfare campaigns in history. The only outstanding question at this juncture is whether there was any collusion, quid pro quo or otherwise, between the Trump campaign and Moscow. That remains to be seen, based on whatever Special Counsel Mueller and his team are able to find.

In America, it sought not to alter the tally, but to create the perception that it’s possible—and instill doubt among Americans in the process. Hacking of voter rolls rather than machines creates an impression in the voters’ psyches without provoking the U.S. into open conflict.

This is likely to be one of the longest-lasting effects of Russian interference in the US election: sowing doubt and discord among the American populace, so much so that it begins to break down trust in governmental institutions, potentially leading toward a collapse of the Republic itself.

That may sound over the top, but it is exactly the outcome Putin desires. He would like America and Russia on a level playing field once again. Since the decline of the Soviet Union, America has constantly been atop Russia, overshadowing it in every aspect of political and military capability. That is, until Putin came into power and changed the game once again.

At this point one has to wonder exactly how capable the United States is with offensive cyber operations. Is the US capable of pulling off a similar campaign in a major country like what Russia did in 2016?

The Seattle Times is reporting a Boeing manufacturing plant was hit with the ostensibly North Korean developed WannaCry ransomware even though the malware was unleashed over a year ago, and a patch has been available from Microsoft since March 2017:

Mike VanderWel, chief engineer at Boeing Commercial Airplane production engineering, sent out an alarming alert about the virus calling for “All hands on deck.”

“It is metastasizing rapidly out of North Charleston and I just heard 777 (automated spar assembly tools) may have gone down,” VanderWel wrote, adding his concern that the virus could hit equipment used in functional tests of airplanes ready to roll out and potentially “spread to airplane software.”

VanderWel’s message said the attack required “a batterylike response,” a reference to the 787 in-flight battery fires in 2013 that grounded the world’s fleet of Dreamliners and led to an extraordinary three-month-long engineering effort to find a fix.

So an assembly plant was affected, but no word on how the WannaCry ransomware penetrated the operational network. This vital piece of information is necessary to better comprehend exactly what happened, why it happened, and how to prevent future similar breaches.

Reuters reports on an Under Armour data breach affecting upwards of 150 million MyFitnessPal user accounts:

The stolen data includes account user names, email addresses and scrambled passwords for the popular MyFitnessPal mobile app and website, Under Armour said in a statement. Social Security numbers, driver license numbers and payment card data were not compromised, it said.

It is the largest data breach this year and one of the top five to date, based on the number of records compromised, according to SecurityScorecard.

Larger hacks include 3 billion Yahoo accounts compromised in a 2013 incident and credentials for more than 412 million users of adult websites run by California-based FriendFinder Networks Inc in 2016, according to breach notification website LeakedSource.com.

Under Armour said it is working with data security firms and law enforcement, but did not provide details on how the hackers got into its network or pulled out the data without getting caught in the act.

I have yet to locate a single article discussing how the breach occurred or any potential vulnerability exploited by the attackers to gain access to MyFitnessPal data.

If you use MyFitnessPal, I strongly suggest you immediately log in and change your password, especially if you reused that password elsewhere [like the vast majority of internet users].

Lifehacker Australia discusses yet another attack vector cyber security professionals need to consider, and one not many are all that familiar with at the moment:

However, the recent attack detected by Neustar was different. While the types of attacks, like DNS reflection attacks, aren't new, the targeting is changing.

George said some early IPv6 implementations were more vulnerable to certain threat vectors because of scale. While companies were in the early stages of IPv6 deployment, they would only deploy the protocol on limited segments of their LANs. As a result, there was limited network capacity and this created a point of weakness that was susceptible to a DDoS attack.

The attraction in using IPv6 for attacks is a lack of awareness and skills, said George.

“A lot of people don’t know it’s there or realise it’s even turned on or have it in their threat profile. They don’t have the same level of protections in place or, if they have a set of plans or run-books for attacks, they don’t have a plan for IPv6,” said George.

Often, this is the result of a focus on deployment leading to a lower prioritisation of security. This is simply because the perceived threat of IPv6-specific attacks is still low.

“They’re deploying it but not focusing on the security side of things. People are working on the assumption that it’s not much of an attack vector”.

In both theory and practice, defending against IPv6 attacks is for the most part no different than defending against IPv4 attacks. If IPv6 is enabled on a network's infrastructure, then the security devices need to be aware of this traffic type and be properly configured to inspect it and act appropriately. If the routers are allowing IPv6 traffic to flow through the network, then the firewalls, intrusion prevention devices, endpoint security suites, and other security tools need to be aware of this and ready to act.
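As a small illustration of why tooling must be address-family aware, the Python sketch below checks a client address against a block list containing both IPv4 and IPv6 networks; a filter built only around IPv4 strings would silently pass the IPv6 traffic. The networks shown are documentation ranges, not real attacker infrastructure.

```python
import ipaddress

# Documentation/example ranges only; a real deployment would load these
# from the same policy source that feeds the IPv4 rules.
BLOCKED_NETS = [
    ipaddress.ip_network("203.0.113.0/24"),  # IPv4 documentation range
    ipaddress.ip_network("2001:db8::/32"),   # IPv6 documentation range
]

def is_blocked(addr: str) -> bool:
    """Return True if addr falls inside any blocked network of its family."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_NETS if net.version == ip.version)
```

The point of the `version` check is the design lesson: every rule set needs an entry for each address family, or one family goes uninspected.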

The New York Times has a detailed report on how a single defense contracting idea turned into the huge Cambridge Analytica data-stealing scandal the entire globe is aware of today:

As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.

It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

It should come as no surprise Palantir is somehow peripherally involved in this operation. The company was founded by a number of extremely intelligent and influential Silicon Valley folks, funded by CIA’s In-Q-Tel venture capital arm, and primarily focuses on business with the United States Intelligence Community and other global intelligence agencies like UK’s Government Communications Headquarters (GCHQ).

Palantir's expertise is data science, sometimes pursued with shady tactics, and the Cambridge Analytica operation involved both. Leveraging the access provided by Facebook was a smart technique for collecting data, analyzing it, and then generating psychological profiles across various political leanings. This ultimately resulted in what we know today: targeted advertising and propaganda intended to poison one candidate in the hopes of increasing the viability of another. And it worked. Very well.

The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday.

Peter Thiel sitting on Facebook's board while also being a co-founder of Palantir seems extremely shady and problematic. The optics suggest he may have had inside knowledge of Facebook's data platform deficiencies and potentially shared that information with Cambridge Analytica through Palantir. This would have been one method for CA to learn techniques for exploiting Facebook user data. But I have not seen any strong evidence to back up this claim, only circumstantial discussion.

This story is not even close to being fully exposed. I suspect there is a lot more we will learn in the coming weeks as new details are revealed.

CSO Online reports on how the GoScanSSH malware is targeting Linux operating systems but somehow manages to avoid government and military operated servers:

For the initial infection, the malware uses more than 7,000 username/password combinations to brute-force attack a publicly accessible SSH server. GoScanSSH seems to target weak or default credentials of Linux-based devices, honing in on the following usernames to attempt to authenticate to SSH servers: admin, guest, oracle, osmc, pi, root, test, ubnt, ubuntu, and user.

Those and other credential combinations are aimed at specific targets, such as the following devices and systems: Raspberry Pi, Open Embedded Linux Entertainment Center (OpenELEC), Open Source Media Center (OSMC), Ubiquiti networking products, jailbroken iPhones, PolyCom SIP phones, Huawei devices, and Asterisk systems.

After a device is infected, the malware determines how powerful the infected system is and obtains a unique identifier. The results are sent to a C2 server accessed via the Tor2Web proxy service “in an attempt to make tracking the attacker-controlled infrastructure more difficult and resilient to takedowns.”

The researchers determined the attack has been ongoing for at least nine months — since June 2017 — and has at least 250 domains; “the C2 domain with largest number of resolution requests had been seen 8,579 times.”

This sounds like a particularly nasty type of attack, but one that ought to be fairly simple to prevent. Considering it is easy to determine what the target system types are, and how the malware functions, deploying the right defense strategy is actually quite straightforward.

A quick couple of very simplistic examples immediately come to mind:

  1. Delete all unnecessary accounts from the above list, or rename the ones you must keep. In most cases guest, oracle, osmc, pi, test, ubnt, ubuntu, and user are unnecessary and can be removed.
  2. For the accounts that remain, ensure direct SSH access as root is turned off. There is never a reason to SSH directly as root; that is the entire point of the sudo and su commands. Log in as another user, then use one of those commands to perform functions as root or other users.
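As a minimal sketch of the second step, assuming OpenSSH, the relevant lines in /etc/ssh/sshd_config might look like the following (the account names are hypothetical placeholders; reload sshd after editing):

```
# /etc/ssh/sshd_config
PermitRootLogin no          # never allow direct root logins
AllowUsers deploy opsadmin  # hypothetical accounts: whitelist only what is needed
MaxAuthTries 3              # slow down brute-force attempts
PasswordAuthentication no   # prefer key-based authentication where possible
```

Disabling password authentication entirely, where operationally feasible, defeats this class of credential brute-forcing outright.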

There are plenty of other methods for combating this attack to make it more difficult to be breached. But simple actions like the above are often overlooked.

Talos provided both an IP blacklist and a domain blacklist that the malware uses to determine if it should continue attempts to compromise the system. Some of those domains include: .mil, .gov, .army, .airforce, .navy, .gov.uk, .mil.uk, govt.uk, .police.uk, .gov.au, govt.nz, and .mil.nz.

If the system or device is on neither set of blacklists, Talos “believes the attacker then compiles a new malware binary specifically for the compromised system and infects the new host, causing this process to repeat on the newly infected system.”
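The mechanics of such a check amount to simple suffix matching. A minimal Python sketch, using only a handful of the suffixes from the Talos list above, might look like this:

```python
# Subset of the Talos-reported avoid list, per the article above.
AVOID_SUFFIXES = (".mil", ".gov", ".army", ".gov.uk", ".mil.uk",
                  ".police.uk", ".gov.au", ".govt.nz")

def should_avoid(hostname: str) -> bool:
    """Return True if a reverse-DNS name matches a blacklisted suffix."""
    host = hostname.lower().rstrip(".")
    return any(host.endswith(suffix) for suffix in AVOID_SUFFIXES)
```

Defenders can invert the same idea: if inbound SSH brute-force traffic consistently skips government address space, that behavioral fingerprint itself helps identify the campaign.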

That is an interesting and novel approach to avoiding governmental systems. It is not unprecedented but definitely not a method often seen.

NPR has an update to a recent ransomware attack against the city of Atlanta stating the city has yet to fully recover and some governmental data remains encrypted while awaiting the ransom payment:

“Many city employees have been without access to Internet and email since Thursday after hackers locked some of its systems and demanded a $51,000 payment. The city says it completed part of its investigation of the cyberattack, but it’s working on restoring full service.”

Mayor Keisha Lance Bottoms told reporters that cybersecurity is now a top priority for the city.

“There’s a lot of work that needs to be done with our digital infrastructure in the city of Atlanta and we know that year after year, that it’s something that we have to focus on and certainly this has sped things up.”

Bottoms says the city has continued to operate despite the cyberattack.

Asked whether the city would pay the ransom to fully restore the city’s network, the mayor told reporters that she would confer with federal authorities on the best course of action.

What a horrible situation. It is terrible to read about a major city like Atlanta fighting to recover from a ransomware attack. The fact a breach of this nature, any breach in fact, occurred is unsettling. Ransomware in particular has been all over the news lately, and the city should have been prepared, even with the standard excuse of having limited funding available for cyber security.

There are a myriad of open source and inexpensive yet effective solutions available. Lack of funding is never a fully adequate excuse unless the actors are nation-states. In that case, almost no organization or enterprise is safe.

Atlanta should have been proactively defending its IT assets rather than waiting around for the worst to happen. That the mayor made the above statement in the middle of recovery efforts demonstrates a complete lack of awareness of, and interest in, budgeting for these events before they happen. There is almost no worse way to approach cyber security. Much like safety, it needs to be in the budget, and appropriately experienced cyber security professionals need to be employed to manage the defenses.

This is a hard-learned lesson for Atlanta, and one they likely will not forget anytime soon.

Mozilla has just launched a new extension for their web browser to basically de-creepify Facebook usage by containing its activity within a sandbox of sorts, ensuring anything you do on the site cannot be shared with third-party companies:

The pages you visit on the web can say a lot about you. They can infer where you live, the hobbies you have, and your political persuasion. There’s enormous value in tying this data to your social profile, and Facebook has a network of trackers on various websites. This code tracks you invisibly and it is often impossible to determine when this data is being shared.

Facebook Container isolates your Facebook identity from the rest of your web activity. When you install it, you will continue to be able to use Facebook normally. Facebook can continue to deliver their service to you and send you advertising. The difference is that it will be much harder for Facebook to use your activity collected off Facebook to send you ads and other targeted messages.

This Add-On offers a solution that doesn’t tell users to simply stop using a service that they get value from. Instead, it gives users tools that help them protect themselves from the unexpected side effects of their usage. The type of data in the recent Cambridge Analytica incident would not have been prevented by Facebook Container. But troves of data are being collected on your behavior on the internet, and so giving users a choice to limit what they share in a way that is under their control is important.

In light of the recent Facebook scandal it is time to rethink the type of data we share with unscrupulous companies like Facebook. While there is value in the services Facebook provides, it all comes at a cost to your privacy. You are not a Facebook customer, but one of its products. All the web surfing you do, and the associated data Facebook can collect around that activity, is valuable and monetizable. This is how the company provides a free service to you, Mr. and Mrs. Product.

Mozilla creating this add-on is just a band-aid, but a valuable one nonetheless. If you use Firefox, I strongly suggest you install this add-on. If you use Chrome, I strongly suggest you switch to Firefox. The latest iterations are lightning fast, just like Chrome, but far less privacy-invasive than the Google-developed browser. It is well worth making the switch.

Personally, I use Safari on macOS and have done everything I can to limit my exposure and decrease any unnecessary risk. Safari makes it easy but there are improvements that could be made. Hopefully a similar extension will be developed for Safari at some point.

In the interim, switch to Firefox and start using this add-on. It is an outstanding step toward freedom and independence from companies that couldn't care less about privacy because, more than anything else, they merely want to turn a strong profit.

TechCrunch reports the Cambridge Analytica story may have just taken a turn for the worse with Chris Wylie, the whistle-blower responsible for these powerful allegations, stating the 50M number was merely a safe number to share with the media:

Giving evidence today, to a UK parliamentary select committee that’s investigating the use of disinformation in political campaigning, Wylie said: “The 50 million number is what the media has felt safest to report — because of the documentation that they can rely on — but my recollection is that it was substantially higher than that. So my own view is it was much more than 50M.”

Somehow I am unsurprised the number will ultimately turn out to be much larger than Facebook is willing to admit. The company is in damage control, especially after having lost $60B in value since the shocking revelations were unveiled almost ten days ago.

Facebook has previously confirmed 270,000 people downloaded Kogan’s app — a data harvesting route which, thanks to the lax structure of Facebook’s APIs at the time, enabled the foreign political consultancy firm to acquire information on more than 50 million Facebook users, according to the Observer, the vast majority of whom would have had no idea their data had been passed to CA because they were never personally asked to consent to it.

Instead, their friends were ‘consenting’ on their behalf — likely also without realizing.

In my own anecdotal testing, I have found that while most people are conscious that Facebook is not necessarily to be trusted, they never realized these applications operated the way they do. That is to say, nobody I have spoken with understood that their friends’ data, or their friends-of-friends’ data, would be shared with third-party applications they interacted with on Facebook. That these applications were knowingly surveilling Facebook accounts was complete news to most of the people I talked to.

This whole story keeps getting worse as the days pass. I wonder how long it will take, and what else will be revealed, before it hits rock bottom.

Federal News Radio reports on the US Navy’s attempt to remove a management bureaucracy layer by eliminating the previous executive-level Navy Chief Information Officer position:

A memo signed last Friday by Thomas Modly, the new undersecretary of the Navy, effectively eliminates the office of the Department of the Navy chief information officer, formerly an influential, separate position within the Secretary of the Navy’s organizational chart.

Going forward, Modly himself will take over the pro-forma title of DON CIO along with all of its responsibilities and authorities. A handful of staff will remain assigned to a restructured and downsized office, but only to handle the IT duties that federal law explicitly requires the secretaries of the military departments to perform.

The changes to the CIO role come as part of a broader management restructuring Modly directed just a few months after his confirmation as the Navy’s number-two civilian official.

The memo fully eliminates the deputy undersecretary of the Navy for management, the organization that, until last week, oversaw the DON CIO and some other functions, including its Office of Strategy and Innovation.

On the surface this sounds like a really bad idea(tm). There needs to be some senior executive leadership overseeing how the Department of the Navy handles not just information technology assets, but the associated cyber security requirements to adequately defend Navy networks.

The new arrangement appears to de-emphasize the notion that the two sea services should operate under one set of IT policies, but also reflects the realities of the different directions the Navy and Marine Corps have taken. The split was noticeable after a 2013 restructuring of what had previously been a single contract for a fully-outsourced Navy-Marine Corps Intranet (NMCI).

In the intervening years, the Navy and Marine Corps have chosen to pursue different models under the Navy’s Next Generation Enterprise Network (NGEN) contract.

The Marines have opted for a fully government owned-and-operated network known as the Marine Corps Enterprise Network (MCEN), including a cloud computing strategy that relies largely on a Marine-operated cloud computing center in Kansas City (MCEITS).

Meanwhile, the Navy has leaned toward an operating model in which it owns most of its infrastructure, but relies on the NGEN contract to perform most of the day-to-day labor involved in running its IT networks in the continental U.S.

Modly’s decision to devolve more control to the services also potentially reduces confusion about the various positions in the Navy that can lay claim to the title of CIO.

NMCI has been nothing short of an utter train wreck. It is no surprise the Marine Corps pulled out of that disaster to go its own separate, more agile way of handling IT. Not only are the Marines doing it for less cost, but service levels have dramatically increased. I never heard from a single person who was happy with NMCI.

Government owned, government operated is a far better model than allowing a contractor to come in and nickel-and-dime the Navy for every little thing it does. NMCI, and by extension the Overseas Navy Enterprise Network (ONE-NET), have never been truly successful. I foresee NGEN turning into the same type of disaster ONE-NET was unless there are some major modifications made to the way the contract is executed.

The Navy has done, and continues to do, things its own way compared to the rest of the US military. After all, this is the department still paying Microsoft to support Windows XP because there are too many outstanding deployments of the operating system in mission-critical areas. Rather than paying to upgrade those systems, the Navy is paying for security patches. This is just outright unfathomable. So maybe it makes sense the Navy has opted to eliminate the CIO position because, it could be argued, the office was not doing its job to begin with.

Bottom line, removing the CIO position demonstrates a lack of understanding of what role a CIO should play in a major organization like the Department of the Navy. I am extremely concerned about the direction the Navy is going and wonder what unintended consequences there will be from this change.

ZDNet reports the Internet Engineering Task Force (IETF) has finally approved version 1.3 of Transport Layer Security (TLS), the key protocol that enables HTTPS on the web:

TLS is the successor to SSL and version 1.3 was designed to prevent attacks that undermined client and server communications secured with TLS 1.2 and earlier versions.

The main benefit of TLS 1.3 is that it supports stronger encryption and drops a host of legacy encryption algorithms.

It also introduces 0-RTT or zero round trip time resumption, which is designed to speed up connections on sites that users frequently visit and is expected to deliver lower latency on mobile networks.

Major internet players have been gradually upgrading to TLS 1.3 over the past few years, though there have been hiccups and obstacles to its deployment.

While Chrome, Firefox, Opera and Edge already support TLS 1.3, they don’t enable it by default. A study by Cloudflare, which enabled TLS 1.3 by default on the server side last year, found in December that just 0.6 percent of traffic was secured with TLS 1.3. The cause was in part due to how network appliance vendors had implemented TLS 1.2.
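The cleanup of legacy protocol versions is easy to see from the client side. As a minimal sketch (assuming Python 3.7+ linked against OpenSSL 1.1.1 or newer, the first OpenSSL release with TLS 1.3 support), you can build a client context that simply refuses anything older than TLS 1.3:

```python
import ssl

# Minimal sketch, assuming Python 3.7+ built against OpenSSL 1.1.1+.
context = ssl.create_default_context()

# Refuse the legacy protocol versions that TLS 1.3 leaves behind
# (TLS 1.2 and earlier will fail the handshake with this context).
context.minimum_version = ssl.TLSVersion.TLSv1_3

# The underlying OpenSSL build reports whether TLS 1.3 is available at all.
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)
```

A context configured this way will complete handshakes only with servers that have already deployed TLS 1.3, which, per the Cloudflare numbers above, is still a small slice of the web.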

Best I can tell, TLS 1.3 does not change SSL decryption when security devices sit inline and essentially perform a man-in-the-middle attack. If your employer is, say, using an intrusion prevention system or web gateway to inspect traffic on the network, and is performing SSL decryption on HTTPS connections, TLS 1.3 does not offer any privacy increase since the decryption capability is still entirely possible.
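One practical way to spot that kind of inline inspection: look at who issued the certificate your machine actually receives. On an inspected network the issuer is the appliance’s corporate CA rather than the public CA the site really uses. A hypothetical helper for this (both function names here are my own illustration, not from the article) might look like:

```python
import socket
import ssl

def issuer_common_name(cert: dict) -> str:
    """Pull the issuer's commonName out of the dict ssl.getpeercert() returns.

    getpeercert() represents the issuer as a tuple of RDNs, each of which is
    a tuple of (attribute, value) pairs.
    """
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""

def peer_issuer(host: str, port: int = 443) -> str:
    """Connect to a host and report who issued the certificate it presented."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_common_name(tls.getpeercert())
```

Calling `peer_issuer("example.com")` from a corporate network doing TLS interception would return something like the proxy vendor’s CA name instead of a public CA, which is exactly the man-in-the-middle arrangement described above — and TLS 1.3 does nothing to prevent it when the corporate CA is trusted by your machine.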