
WIRED discusses the EU General Data Protection Regulation (GDPR) and how the new privacy law will likely change the way websites collect data on their users:

Instead, companies must be clear and concise about their collection and use of personal data like full name, home address, location data, IP address, or the identifier that tracks web and app use on smartphones. Companies have to spell out why the data is being collected and whether it will be used to create profiles of people’s actions and habits. Moreover, consumers will gain the right to access data companies store about them, the right to correct inaccurate information, and the right to limit the use of decisions made by algorithms, among others.

The law protects individuals in the 28 member countries of the European Union, even if the data is processed elsewhere. That means GDPR will apply to publishers like WIRED; banks; universities; much of the Fortune 500; the alphabet soup of ad-tech companies that track you across the web, devices, and apps; and Silicon Valley tech giants.

As an example of the law’s reach, the European Commission, the EU’s legislative arm, says on its website that a social network will have to comply with a user request to delete photos the user posted as a minor — and inform search engines and other websites that used the photos that the images should be removed. The commission also says a car-sharing service may request a user’s name, address, credit card number, and potentially whether the person has a disability, but can’t require a user to share their race. (Under GDPR, stricter conditions apply to collecting “sensitive data,” such as race, religion, political affiliation, and sexual orientation.)
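To make the access and erasure rights described above concrete, here is a minimal, hypothetical sketch of how a data controller might model them. Everything here (the class, field names, and the consent rule) is illustrative and invented for this post, not from any real compliance system:

```python
from dataclasses import dataclass, field

# GDPR's "sensitive data" categories get stricter treatment; this toy
# store simply refuses to collect them without explicit handling.
SENSITIVE_FIELDS = {"race", "religion", "political_affiliation", "sexual_orientation"}

@dataclass
class DataStore:
    records: dict = field(default_factory=dict)  # user_id -> {field: value}

    def collect(self, user_id, **data):
        blocked = SENSITIVE_FIELDS & data.keys()
        if blocked:
            raise ValueError(f"sensitive data requires explicit consent: {blocked}")
        self.records.setdefault(user_id, {}).update(data)

    def access_request(self, user_id):
        # Right of access: return everything held about the data subject.
        return dict(self.records.get(user_id, {}))

    def erasure_request(self, user_id):
        # Right to erasure: delete the subject's data and confirm.
        return self.records.pop(user_id, None) is not None

store = DataStore()
store.collect("u1", full_name="Jane Doe", ip_address="203.0.113.7")
print(store.access_request("u1"))  # everything held about "u1"
print(store.erasure_request("u1"))  # erasure confirmed
```

The real regulation obviously involves far more (lawful bases, notification of downstream processors, deadlines), but the shape of the obligations (enumerable data, answerable access requests, verifiable deletion) is the engineering core.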

If you do anything on the web, which in 2018 is tantamount to asking someone if they have electricity, then this is a must-read. Europe really is at the forefront of privacy law, and we can only hope other nations will follow suit. But since the web knows no borders, GDPR will apply to every company and organization operating on it. So as a netizen, become familiar with this regulation and know what is, and is not, allowed.

There is a lot of talk about GDPR all over the technology industry, but especially on the web. In light of today's Cambridge Analytica story detailing how the company easily collected data from Facebook, protecting consumer privacy from continued breaches of trust is paramount. A lot of money is being expended on GDPR compliance, and I really wonder how it will change the landscape, if it changes it at all.

Just as Cambridge Analytica was able to exploit loopholes in Facebook’s system, I wonder what company will be the first to find and leverage loopholes in GDPR, and what will happen to them for doing so.

TechRepublic is reporting on a Securities and Exchange Commission update to a 2011 cyber security statement, stating that US publicly traded companies will be required to disclose in a timely manner when they have been breached or when there are material cyber security risks:

First, and most importantly, is that the SEC is essentially extending its interpretation of older disclosure rules to cover cybersecurity. If you are familiar at all with SEC disclosure guidelines under the Securities Act of 1933 and the Securities Exchange Act of 1934, these new guidelines won’t appear very different—the SEC even wants disclosures filed on the same forms.

As the original 2011 statement said, “although no existing disclosure requirement explicitly refers to cybersecurity risks and cyber incidents, companies nonetheless may be obligated to disclose such risks and incidents.”

What this new interpretive statement does is reinforce and expand the 2011 original, along with adding an important section designed to crack down on insiders trading stock based on undisclosed knowledge of a cyber attack—something important to consider in the wake of stock dumping accusations surrounding the Equifax breach (of which executives were later cleared in an internal investigation).

What the SEC has to say on that particular front is clear: “directors, officers, and other corporate insiders must not trade a public company’s securities while in possession of material nonpublic information, which may include knowledge regarding a significant cybersecurity incident experienced by the company.”

In other words, disclose incidents immediately to prevent even the appearance of impropriety.

It should be obvious to anyone that disclosure should be mandatory. However, most companies will act in the best interest of the officers running the company, and therefore will often attempt to hide breaches from the public. This is harmful in so many ways that it is almost unbelievable that in 2018 there are no legally binding requirements.

NBC News on the mounting pressure Tokyo faces to do something about its smoking problem prior to the upcoming 2020 games:

The scent of chicken skewers cooking over a charcoal grill mixes with another distinctive odor — cigarette smoke from a handful of white-collar workers unwinding after a busy day at the office.

This scene plays out in small bars across the Japanese capital, where many restaurants and watering holes still allow their customers to smoke.

But lawmakers here are coming under pressure to implement tougher restrictions against passive smoking before Tokyo hosts the 2020 Summer Olympics, with the World Health Organization and the International Olympic Committee leading the calls for broad bans in public spaces.

The health ministry estimates that about 15,000 deaths in the country each year are linked to second-hand smoke. But the habit has proved tough to kick.

Anti-tobacco campaigners have a theory about what’s behind the government’s reluctance to take strong action against smoking. Japan’s finance ministry still holds a one-third stake in the ownership of Japan Tobacco, the country’s biggest seller of cigarettes. That means a portion of the firm’s profits flow into the government’s coffers.

If there is one complaint about Tokyo at the top of my list, it would be this. Smoking is so pervasive here it is almost an afterthought. Some wards have enacted regulations about smoking, but there is no all-out law covering the Tokyo metropolitan area.

The small yakitori shops and izakayas already smell bad enough because of the smoke from the BBQ or kitchen, but add some cigarette stench on top and you have a recipe for disaster. I hate going home with that sickly scent caked all over my clothes, especially during winter when wearing a jacket.

Senator Bill Nelson, ranking Democrat on the Commerce Committee, has revived the Data Security and Breach Notification Act, a bill calling for jail time for corporate executives who conceal data breaches:

If it becomes law, then it would overrule the many statewide laws regulating breach notifications by establishing a nationwide standard.

There’s a requirement for companies to notify customers within 30 days, along with the potential criminal penalties.

It also directs the FTC to develop standards businesses must follow if they collect customer information, like naming a person in charge of information security, establishing a process to identify vulnerabilities, having a process for the disposal of information, and other items in that vein.

In a statement, Nelson said “Congress can either take action now to pass this long overdue bill or continue to kowtow to special interests who stand in the way of this commonsense proposal. When it comes to doing what’s best for consumers, the choice is clear.”

In 2015 Nelson’s bill was one of several introduced to deal with the issue of protecting customers from these leaks and it’s likely that it will again have company.

It is doubtful the bill will go anywhere; this is likely all just for show for Nelson’s constituents. The bill is a pipe dream and will almost certainly never become law.

Germany is exploring the legalities around responses to cyber attacks and recognizes the country may need to change its constitution to allow striking back at cyber actors:

Germany may need to change its constitution to allow it to strike back at hackers who target private computer networks and it hopes to complete any legal reforms next year, a top Interior Ministry official said on Monday.

State Secretary Klaus Vitt told Reuters the government believed “significant legal changes would be needed” to allow such “hack back” actions.

“A constitutional change may be needed since this is such a critical issue,” Vitt said on the sidelines of a cyber conference organized by the Handelsblatt newspaper.

Vitt said much would depend on the outcome of coalition talks in Germany of which cyber capabilities formed a part.

Top German intelligence officials told parliament last month they needed greater legal authority to strike back in the event of cyber attacks from foreign powers.

We can all thank Russia and Putin for forcing this issue globally. Their exceptional use of cyber attacks coupled with propaganda has changed the conventional approach to using cyber as part of a broader geopolitical strategy.

Additionally, as Japanese Prime Minister Shinzo Abe continues to explore changing Article 9 of the Japanese Constitution, I suspect this will become an issue here in Japan as well. Hacking back against malicious actors is not as cut-and-dried as some would suspect. Specific legal authority is required; otherwise the country could face legal issues and liability, especially if attribution is incorrect and an innocent bystander is attacked.

In a closely watched case, the Ninth Circuit Court of Appeals just ruled that sharing passwords can be considered a federal offense:

In the majority opinion, Judge Margaret McKeown wrote that “Nosal and various amici spin hypotheticals about the dire consequences of criminalizing password sharing. But these warnings miss the mark in this case. This appeal is not about password sharing.” She then went on to describe a thoroughly run-of-the-mill password sharing scenario—her argument focuses on the idea that Nosal wasn’t authorized by the company to access the database anymore, so he got a password from a friend—that happens millions of times daily in the United States, leaving little doubt about the thrust of the case.

The argument McKeown made is that the employee who shared the password with Nosal “had no authority from Korn/Ferry to provide her password to former employees.”

At issue is language in the CFAA that makes it illegal to access a computer system “without authorization.” McKeown said that “without authorization” is “an unambiguous, non-technical term that, given its plain and ordinary meaning, means accessing a protected computer without permission.” The question that legal scholars, groups such as the Electronic Frontier Foundation, and dissenting judge Stephen Reinhardt ask is an important one: Authorization from who?

If the account holder authorized someone to access their account using their credentials, then does that not constitute authorization, as written in the CFAA? The law does not define which party is required to provide authorization in order to prevent triggering a violation of the CFAA.

  • Is the account holder allowed to authorize access?
  • Is authorization required from the system owner?

Imagine all the scenarios that could play out based on either of those authorization requirements. As the article rightly discusses, if the latter is needed, everyone sharing Facebook, Spotify, Apple, Netflix, and other similar accounts is in violation of the CFAA and therefore subject to prosecution.

As with most US laws around the idea of hacking, the CFAA is in desperate need of updating.

China is close to codifying a controversial new cyber security law that most foreign businesses are going to have a tough time swallowing:

The draft law would require companies to “comply with social and business ethics” and “accept supervision by both government and the public,” according to the state news agency Xinhua.

It would also stipulate that Chinese citizens’ personal information and other data collected in China must be housed in the country.

A new provision would also order Beijing to “monitor and deal with threats from abroad to protect the information infrastructure from attack, intrusion, disturbance or damage.”

This should not come as a surprise, and I wonder what makes it so controversial in the first place. From the Chinese perspective, the obvious point is that many highly popular cyber security businesses are American. As far as China is concerned, any American business is tied up with the American government, and thus merely an extension of the NSA, CIA, and other intelligence agencies. This law allows China to maintain some semblance of control over the cyber security being provided in their country.

Of course foreign businesses are going to complain. This law is being codified in public, with the Chinese government basically admitting bias. Contrast that with how the United States handles the same issue: there is no official written law on the books, but rather a de facto ban against Chinese cyber security firms.

SCOTUS further chips away at the Fourth Amendment, allowing the FBI to hack Americans without a warrant:

But, the FBI’s malware actually grabbed more than just suspects’ IP addresses. It also beamed their username and some other system information to the FBI; information that is undoubtedly within a user’s computer—no two ways about it.

This doesn’t faze the judge either, who writes that the defendant “has no reasonable expectation of privacy in his computer,” in part because the malware collected a relatively limited amount of details.

“The NIT only obtained identifying information; it did not cross the line between collecting addressing information and gathering the contents of any suspect’s computer,” he writes.

Sounds like this judge was stretching the limits of interpretation solely for the sake of being able to convict someone evil. But this is how the slope gets slippery.

Unfortunately, expect this to continue to happen moving into the future, and ultimately become the new norm:

Rumold from EFF added that “the decision underscores a broader trend in these cases: courts across the country, faced with unfamiliar technology and unsympathetic defendants, are issuing decisions that threaten everyone’s rights.”

This is my hometown and I am stunned Los Angeles leadership believes this to be a viable option for preventing human trafficking (emphasis added):

Councilwoman Martinez feels that prostitution is not a “victimless” crime, and that by discouraging johns, the incidence of the crime can be reduced. Martinez told CBS Los Angeles, “If you aren’t soliciting, you have no reason to worry about finding one of these letters in your mailbox. But if you are, these letters will discourage you from returning. Soliciting for sex in our neighborhoods is not OK.”

The Los Angeles City Council voted Wednesday to ask the office of the City Attorney for their help implementing the plan.

Have Ms. Martinez and the Los Angeles City Council taken leave of their senses? This scheme makes, literally, a state issue out of legal travel to arbitrary places deemed by some — but not by a court, and without due process — to be “related” to crime in general, not to any specific crime.

There isn’t “potential” for abuse here; this is a legislated abuse of technology that is already controversial when used by police for the purpose of seeking stolen vehicles, tracking down fugitives, and solving specific crimes.

Potent essay in favor of strong encryption even though the US intelligence apparatus would like Americans to believe terrorists use it to hide their communications from law enforcement (demonstrably false in certain circumstances, such as Paris):

People who protect liberty have to take care not to imply, much less acknowledge, that the draconian anti-liberty measures advocated by the surveillance state crowd are justified, tactically or morally, no matter what the circumstances. Someday a terrorist will be known to have used strong encryption, and the right response will be: “Yes, they did, and we still have to protect strong encryption, because weakening it will make things worse.”

Why? Because encryption is actually a straightforward matter, no matter how much fear-mongering law enforcement officials and craven, willfully ignorant politicians spout about the need for a backdoor into protected communications. The choice is genuinely binary, according to an assortment of experts in the field. You can’t tamper this way with strong encryption without making us all less secure, because the bad guys will exploit the vulnerabilities you introduce in the process. This isn’t about security versus privacy; as experts have explained again and again, it’s about security versus security.

Moreover, as current and former law enforcement officials lead a PR parade for the surveillance-industrial complex, pushing again for pervasive surveillance, they ignore not just the practical problems with a “collect it all” regime — it drowns the spies in too much information to vet properly — but also the fundamental violation of liberty that it represents. These powers are always abused, and a society under surveillance all the time is a deadened one, as history amply shows.

Of course we need some surveillance, but in targeted ways. We want government to spy on enemies and criminal suspects, but with the checks and balances of specific judicial approval, not rubber stamps for collect-it-all by courts and Congress. The government already has lots of intrusive tools at its disposal when it wants to know what specific people are doing. But our Constitution has never given the government carte blanche to know everything or force people to testify against themselves, among other limits it establishes on power.

The ACLU has asked a US appeals court to halt the NSA’s continued collection of millions of Americans’ phone records prior to the program’s expiration in November:

Under the USA Freedom Act, which Congress passed in June, new privacy provisions take effect on Nov. 29 that will end the bulk collection, first disclosed by former NSA contractor Edward Snowden in 2013.

The program collects “metadata” such as the number dialed and the duration of calls but does not include their content.

Arguments on Wednesday centered on whether the program may continue operating between now and November.

Henry Whitaker, a lawyer for the Obama administration, told the three-judge panel that Congress clearly intended the collection to continue while the NSA transitions to the new system.

But Alex Abdo, an ACLU lawyer, said the statute explicitly extended the same Patriot Act provisions that the court concluded do not permit bulk collection.

The judges expressed concern that, as Circuit Judge Robert Sack put it, halting the program would “short-circuit” a process already under way.

Saying the ACLU had won a “historic achievement,” Sack asked, “Why don’t you declare victory and withdraw?”

Abdo said the ongoing collection harmed the ACLU’s ability to confer with clients, such as whistleblowers, without worrying about whether the communications would be swept up by the NSA.

An appeals court ruling on NSA bulk data collection rested on an unresolved technicality rather than on the constitutionality of the surveillance itself. Ultimately the court said it was unable to rule on the bulk collection because there is no way to determine whether the plaintiffs’ data was collected (emphasis added):

The decision did not declare the NSA’s program, which was revealed by whistleblower Edward Snowden in 2013, to have been legal or constitutional. Rather, it focused on a technicality: a majority opinion that the plaintiffs in the case could not actually prove that the metadata program swept up their own phone records. Therefore, the plaintiffs, the court declared, did not have standing to sue.

“Plaintiffs claim to suffer injury from government collection of records from their telecommunications provider relating to their calls. But plaintiffs are subscribers of Verizon Wireless, not of Verizon Business Network Services, Inc. — the sole provider that the government has acknowledged targeting for bulk collection,” wrote Judge Stephen F. Williams.

“Today’s ruling is merely a procedural decision,” said Alexander Abdo, the American Civil Liberties Union attorney who argued against the program at the U.S. District Court. “Only one appeals court has weighed in on the merits of the program, and it ruled the government’s collection of Americans’ call records was not only unlawful but ‘unprecedented and unwarranted.’”

Despite Friday’s decision, the bulk collection program will end later this year in accordance with the USA Freedom Act, passed by Congress in June.

The NSA previously argued that its massive collection of telephony metadata was legal because the records met the legal standard of being “relevant to an authorized investigation.”

In the May decision, Judge Gerald E. Lynch described the government’s interpretation of the word “relevant” as “extremely generous” and “unprecedented and unwarranted,” saying that the program had serious constitutional concerns and was ultimately illegal. However, the court did not order the program’s closure, because Congress was due to debate the USA Freedom Act within a month’s time.

Cyberspace is a complex warfighting domain with many variables to both deter and incentivize attacks. However, a question that continues to loom over everyone’s head is this: how should a government respond to a state-sponsored cyber attack? (emphasis added)

Even as the number of highly disruptive and destructive cyberattacks grows, governments remain unprepared to respond adequately. In other national security areas, policy responses to state-sponsored activity are well established. For example, a country can expel diplomats in response to a spying scandal, issue a demarche if a country considers its sovereignty to have been violated, and use force in response to an armed attack. Clear and established policy responses such as these do not yet exist for cyberattacks for two reasons. First, assessing the damage caused by a cyber incident is difficult. It can take weeks, if not months, for computer forensic experts to accurately and conclusively ascertain the extent of the damage done to an organization’s computer networks. For example, it took roughly two weeks for Saudi authorities to understand the extent of the damage of the Shamoon incident, which erased data on thirty thousand of Saudi Aramco’s computers. Although this may be quick by computer forensics standards, a military can conduct a damage assessment from a non-cyber incident in as little as a few hours.

Second, attributing cyber incidents to their sponsor remains a significant challenge. Masking the true origins of a cyber incident is easy—states often use proxies or compromised computers in other jurisdictions to hide their tracks. For example, a group calling itself the Cyber Caliphate claimed responsibility for taking French television station TV5 Monde off the air with a cyberattack in April 2015, and used the television station’s social media accounts to post content in support of the self-proclaimed Islamic State. Two months later French media reported that Russian state-sponsored actors, not pro–Islamic State groups, were likely behind the incident. Even when attribution is possible, it is not guaranteed that domestic or foreign audiences will believe the claim unless officials reveal potentially classified methods used to determine the identity of the perpetrator, damaging intelligence assets. Under pressure, responses are likely to be made quickly with incomplete evidence and attract a high degree of public skepticism. This creates clear risks for policymakers. Quick damage assessments could lead to an overestimation of the impact of an incident, causing a state to respond disproportionately. Misattributing an incident could cause a response to be directed at the wrong target, creating a diplomatic crisis.

Germany passed a new cyber security law earlier this summer but it apparently is not working well because of the ambiguity legalese wields (emphasis added):

This summer, Germany adopted a new law, known in German as the IT-Sicherheitsgesetz, to regulate cybersecurity practices in the country. The law requires a range of critical German industries establish a minimal set of security measures, prove they’ve implemented them by conducting security audits, identify a point of contact for IT-security incidents and measures, and report severe hacking incidents to the federal IT-security agency, the BSI (Bundesamt für Sicherheit in der Informationstechnik). Failure to comply will result in sanctions and penalties. Specific regulations apply to the telecommunications sector, which has to deploy state of the art protection technologies and inform their customers if they have been compromised. Other tailored regulations apply to nuclear energy companies, which have to abide by a higher security standard. Roughly 2000 companies are subject to the new law.

The government sought private sector input early on in the process of conceptualizing the law—adhering to the silly idea of multistakeholderism—but it hasn’t been helpful in heading off conflict. German critical infrastructure operators have been very confrontational and offered little support. Despite some compromises from the Ministry of the Interior, which drafted the law, German industry continues to disagree with most of its contents.

First, there are very few details to clarify what is meant by “minimal set of security measures” and “state of the art security technology.” The vagueness of the text is somewhat understandable. Whenever ministries prescribed concrete technologies and detailed standards in the past, they were mostly outdated when the law was finally enacted (or soon after that), so some form of vagueness prevents this. But vagueness is inherently problematic. Having government set open standards limits market innovation as security companies will develop products to narrowly meet the standards without considering alternatives that could improve cybersecurity. Moreover, the IT security industry is still immature. It is impossible to test and verify a product’s ultimate effectiveness and efficiency, leading to vendors promising a broad variety of silver bullet cybersecurity solutions—a promise that hardly lasts longer than the first two hours of deployment.

China’s attack on Github earlier this year is creating international cyber norms thanks to the lack of any substantive retaliation by the US government (emphasis added):

By that measure, the United States has been establishing plenty of norms lately. After accusing North Korea of seeking to censor Sony with a cyberattack, the US announced meaningless sanctions; there’s no sign that the US has found, let alone frozen, any of the secretive North Korean intelligence agency’s assets. Similarly, even though the US director of national intelligence long ago attributed the OPM hack to China, the National Security Council continues to dither about whether and how to retaliate.

When it comes to setting new norms through inaction, though, the most troubling incident is China’s denial of service attack on GitHub. Like lots of US tech successes, GitHub didn’t exist ten years ago, but it is now valued at more than $2 billion. Its value comes from creating a collaborative environment where software can be edited by dozens or hundreds of people around the world. Making information freely available is the core of its business. So when the Chinese government decided to block access to the New York Times, the paper provided access to Chinese readers via GitHub. China then tried to block GitHub, as it had the Times. But if Chinese programmers can’t access GitHub, they can’t do their jobs. The outcry from Chinese tech companies forced the Chinese government to drop its block within days.

It was a victory for free speech. Or so you’d think. But the Chinese didn’t give up that easily. They went looking for another way to punish GitHub. And found it. Earlier this year, GitHub was hit with a massive distributed denial of service attack. Computers in the US, Taiwan, and Hong Kong sent waves of meaningless requests to GitHub, swamping its servers and causing intermittent outages for days. The company’s IT costs skyrocketed. A similar attack was launched against Greatfire.org, a technically sophisticated anticensorship site.

A Citizen Lab report shows that this denial of service attack was actually a pathbreaking new use of China’s censorship infrastructure. Over the years, China has built a “Great Firewall” that intercepts every single internet communication between China and the rest of the world. Up to now, China has used that infrastructure to inspect Chinese users’ requests for content from abroad. Uncontroversial requests are allowed to proceed after inspection. But most requests for censored information trigger a reset signal that cuts the connection. The same infrastructure could be used to inspect foreign requests for data from Chinese sites, but there’s no obvious need to do so because the Chinese sites are already under the government’s thumb.
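The inspect-then-reset behavior described above can be modeled in a few lines. This is a deliberately toy sketch: the host list is invented for illustration, and the real system operates on live TCP traffic by injecting actual RST packets, not on strings:

```python
# Toy model of Great Firewall inspection: uncontroversial requests
# proceed after inspection; requests for censored hosts trigger a
# simulated reset ("RST") that cuts the connection.
CENSORED_HOSTS = {"nytimes.com", "greatfire.org"}  # hypothetical blocklist

def inspect(host: str) -> str:
    """Return 'RST' to cut the connection, 'PASS' to let it proceed."""
    return "RST" if host.lower() in CENSORED_HOSTS else "PASS"

print(inspect("github.com"))   # uncontroversial request proceeds
print(inspect("nytimes.com"))  # censored request gets reset
```

The attack the Citizen Lab report documents was novel precisely because it inverted this machinery: instead of only dropping inbound requests, the infrastructure modified outbound traffic to conscript foreign browsers into the DDoS.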