FMG Law Blog Line

Posts Tagged ‘technology’

New Cybersecurity Trend: Data Security and Disposal Laws

Posted on: February 7th, 2019

By: David Cole & Amy Bender

Tales of data breaches flood our news reports these days. By now, you hopefully are aware that all 50 states have laws requiring persons and organizations that own or maintain computerized data that includes personal information to notify affected individuals, and sometimes the government, in the event of a data breach involving their personal information. (You know those letters you’ve received from hospitals, retail stores, and other companies advising you that they experienced a data breach that may have exposed your personal information? They didn’t notify you out of the goodness of their hearts – it’s the law!)

In the past, these laws have focused solely on notifying affected individuals about compromises to their personal information. Outside of specific industries, such as healthcare or financial services, which are regulated by laws applicable only to them, such as HIPAA and the Gramm-Leach-Bliley Act, respectively, there have not been laws of general applicability regulating the standard of care required to protect personal information in the first place. Recently, however, a trend has emerged among state legislatures to take the next step in cybersecurity legislation by setting standards for how businesses must protect consumers' personal information.

The majority of states have now enacted data security and/or data disposal laws that place affirmative obligations on entities (or, in some instances, certain industries) that own or use computerized data containing personal information to safeguard that data, encrypt it, and/or dispose of it securely. Below is a current list of states that have adopted these laws:

(Click here for our discussion of the significant and comprehensive data security law California passed last year.)

Unfortunately, there is not one universal standard for how to secure and destroy data containing personal information, but rather, the standard varies by state. Organizations that operate in multiple states thus may have to comply with multiple and differing requirements. In addition, many of these laws only provide general, and often vague, guidelines that do not specify particular technologies or data security measures that should be implemented. For instance, many laws only require that businesses implement “reasonable” administrative, physical, and/or technical safeguards to protect personal information from unauthorized use or disclosure, and then describe “reasonable” measures as those “appropriate based on the size of the business and the nature of information maintained.” That may be clear as mud, but at least it’s a start and enough to put businesses on notice that doing nothing is not an option.
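
To make "technical safeguards" a bit more concrete, the sketch below shows one measure these statutes commonly contemplate: encrypting personal information before it is stored. It is purely illustrative, not something any particular statute prescribes; it assumes Python's cryptography package, and the record fields and key handling are hypothetical.

```python
# Illustrative sketch only: encrypting a sensitive field before storage is one
# example of a "technical safeguard." No state statute mandates this specific
# library or approach; the record below is hypothetical dummy data.
from cryptography.fernet import Fernet

# In a real system the key would come from a key-management service or secrets
# manager, not be generated and kept alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {"name": "Jane Doe", "ssn": "000-00-0000"}  # hypothetical personal information

# Encrypt the sensitive field before writing the record to disk or a database.
record["ssn"] = cipher.encrypt(record["ssn"].encode()).decode()

# Decrypt only at the point where the plaintext value is actually needed.
ssn_plaintext = cipher.decrypt(record["ssn"].encode()).decode()
```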

For these reasons, we recommend that businesses work with legal counsel to understand the laws of the states where they do business and to conduct a security risk assessment to evaluate the information they maintain, the potential risks to it, and the measures currently in place to protect it. With counsel's guidance, businesses should then engage an experienced cybersecurity provider to translate that risk assessment into an actionable plan for improving data security and privacy within their organization. The legal standards still might be vague, but going through a process like this will put businesses in the best position to demonstrate good faith and reasonable efforts to meet their legal obligations if and when an incident occurs or a claim is made by a third party.

Please contact David Cole, Amy Bender, or one of the other members of our Data Security, Privacy & Technology team at FMG for additional questions or to discuss conducting a risk assessment for your organization.

New Task Force Aims to Reform California’s Technological Ethical Rules

Posted on: January 15th, 2019

By: Paige Pembrook

On December 5, 2018, the California State Bar Task Force on Access Through Innovation in Legal Services held its first meeting and started a long process to modernize ethical rules that currently inhibit lawyers from fully using innovative technologies and services from non-lawyer businesses. Under the current Rules of Professional Conduct for California lawyers, attorneys risk professional discipline and malpractice liability when using services and software offered by non-lawyer technology businesses, even though those services and software offer significant potential to improve access to and delivery of legal services.

Earlier this year, the State Bar charged the Task Force with recommending rule modifications to allow collaboration and technological innovation in legal services, including use of artificial intelligence and online legal service delivery models. The Task Force is specifically tasked with scrutinizing existing rules and regulations concerning the unauthorized practice of law, lawyer advertising and solicitation, partnerships with non-lawyers, fee splitting, and referral compensation. The Task Force must submit its recommendations to the State Bar Board of Trustees before December 31, 2019.

As any effective rule changes remain years away, lawyers must be aware of and comply with the current rules that restrict lawyers seeking to collaborate with and use technology from non-lawyer businesses. The Rules of Professional Conduct are often implicated when lawyers collaborate with non-lawyer businesses offering technology-driven legal services and software. These rules include those premised on harm to clients that flows from incompetent legal service (Rule 1.1), non-lawyer ownership of law offices and the unauthorized practice of law (Rules 5.4 and 5.5), and the dissemination of biased and/or misleading information (Rules 7.1-7.3).

To the extent that lawyers violate any of the aforementioned rules by using technology-driven legal services and software offered by non-lawyer businesses, they may be subject to State Bar discipline.

If you have any questions or would like more information, please contact Paige Pembrook at [email protected].

Many Drivers Don’t Appreciate Limitations of Driver Assistance Technologies

Posted on: September 28th, 2018

By: Wes Jackson

Pump the brakes, George Jetson! While car technology is quickly advancing towards autonomous vehicles, we aren't there yet. Even so, a recent study from the AAA Foundation for Traffic Safety suggests many drivers overestimate the abilities of new driver assistance technologies, which could lead to unsafe driving habits.

The study examined drivers' attitudes toward and interactions with "advanced driver assistance systems," or ADAS. Anyone who has recently purchased a new car is likely familiar with many of the latest ADAS technologies such as forward collision warning, automatic emergency braking, lane departure warning, lane keeping assist, blind spot monitoring, rear cross-traffic alert, and adaptive cruise control.

While the study found that most drivers trusted and used these ADAS features, it also revealed that most drivers do not appreciate their limitations. For example, only 21% of owners of vehicles with blind spot monitoring knew that such systems could not detect vehicles passing at a high rate of speed. Similarly, only a third of owners of vehicles with automatic emergency braking systems knew the systems relied on cameras and sensors that could be compromised by dirt or other debris.

What’s worse, some drivers with ADAS systems admitted to adopting unsafe driving habits in response to the new technologies. For instance, 29% of respondents to the study reported feeling comfortable engaging in other activities while using adaptive cruise control. Similarly, 30% of respondents admitted to relying exclusively on their blind spot monitoring system without checking their blind spots, and 25% of respondents admitted to backing up without looking over their shoulder when using a rear cross-traffic alert system.

These new ADAS technologies can certainly help motorists drive more safely. However, drivers should not succumb to the illusion that these new technologies have made alert driving a thing of the past. Until we're all flying around in autonomous space-age vehicles, be sure to keep your eyes on the road and always look twice before backing up or changing lanes.

The Transportation Law Team at Freeman Mathis & Gary, LLP is on the cutting edge of autonomous vehicle issues. If you have any questions about the AAA Foundation’s report or issues concerning autonomous vehicles, please contact Wes Jackson at [email protected].

If You Don’t Have Anything Nice To Say….You Probably Shouldn’t Post It!

Posted on: August 22nd, 2018

By: Shaun Daugherty & Samantha Skolnick

Mothers all over the world have admonished their children: "if you don't have anything nice to say, don't say anything at all."  It may lose something when translated into some obscure dialects, but the sentiment is still there.  Now that we live in the age of technology, it appears that the old saying could use a facelift.  "If you don't have anything nice to say, you should not type it anywhere on the internet."  That is especially true if you are criticizing doctors and hospitals.

A wave of litigation has been emerging involving doctors and hospitals, but in these instances they are not the targets; they are the plaintiffs.  Doctors and hospitals are starting to sue their patients over negative reviews on social media. The most recent example earned an article in USA Today: retired Colonel David Antoon had to pay $100 to settle felony charges after emailing his surgeon articles that the doctor found threatening and posting on Yelp a list of the surgeries the urologist had scheduled for the same time as his own.  Antoon alleged that the surgery left him incontinent and impotent, and he had tried to appeal to the court of public opinion.

In other news, a Cleveland physician sued a former patient for defamation after the patient's negative internet reviews of the doctor allegedly crossed into deliberately false and defamatory statements. The case may be headed to trial in August. Close by, a Michigan hospital sued three relatives over Facebook posts and picketing that it claimed amounted to defamation, tortious interference, and invasion of privacy. The family claimed that the hospital had mistreated their deceased grandmother.

We live in a country that ensures freedom of speech, and that right is exercised more than ever with the advent of social media and an ever-growing audience of participants.  However, there can be consequences if the speech is inaccurate or defamatory in nature.  Some attorneys, like Steve Hyman, point to the law itself: "[t]ruth is an absolute defense. If you do that and don't make a broader conclusion that they're running a scam factory then you can write a truthful review that 'I had a bad time with this doctor.'"  Other commentators, like Evan Mascagni of the Public Participation Project, advise avoiding broad generalizations: "If you're going to make a factual assertion, be able to back that up and prove that fact." That is defense against defamation claims 101.

The world of non-confrontational criticism on social media makes it easy and tempting to post an emotionally fueled rant.  But beware!  You want to avoid a situation like that of Michelle Levine, who has spent nearly $20,000 defending herself against a suit filed by her gynecologist over defamation, libel, and emotional distress. The 24-hour rule is still a viable alternative to hitting "send" or "post."  Type it out, let it sit and ruminate for a bit, and then decide whether you are going to post the negative comments for the world to see.  Some opinions are worth sharing, or you may decide… don't say anything at all.

If you have any questions or would like more information, please contact Shaun Daugherty at [email protected] or Samantha Skolnick at [email protected].

Smart Cities Face Hacking Threat

Posted on: August 15th, 2018

By: Ze’eva Kushner

As you sit in traffic, frustrated and wondering why the city or municipality cannot do something to ease congestion, know that a city’s use of internet-connected technology to make your commute better may also invite hackers to wreak havoc on your city.

Traffic is just one of many problems that “smart cities” use internet-connected technology to address.  A smart city can set up an array of sensors and integrate their data to monitor things like air quality, water levels, radiation, and the electrical grid.  That data then can be used to automatically inform fundamental services like traffic and street lights and emergency alerts.
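
As a simplified illustration of that sensor-to-service loop, the sketch below shows integrated sensor readings driving an automated alert. The sensor names, readings, and threshold are invented for the example and are not drawn from any actual deployment.

```python
# Hypothetical illustration of sensor data automatically informing a city
# service; the sensor names, readings, and threshold are invented.
AIR_QUALITY_ALERT_THRESHOLD = 150  # an AQI level treated as unhealthy in this example

def air_quality_alerts(readings: dict) -> list:
    """Return an alert message for each sensor whose reading crosses the threshold."""
    return [
        f"Air quality alert near {sensor}: AQI {value}"
        for sensor, value in readings.items()
        if value >= AIR_QUALITY_ALERT_THRESHOLD
    ]

print(air_quality_alerts({"downtown": 172, "midtown": 88}))
# ['Air quality alert near downtown: AQI 172']
```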

Smart city technology provides many benefits to city management, including connectivity and ease of management.  However, these very same features make the technology an attractive target for hackers.  In a recently released white paper, IBM revealed 17 vulnerabilities in smart city systems around the world.  Some of these were as simple as easily guessed default passwords that were never changed; others were bugs that could allow an attacker to inject malicious commands or sidestep authentication checks.  In addition, using the open internet rather than an internal city network to connect sensors or relay data to the cloud gives hackers another way in.
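
The command-injection risk is easy to picture. The sketch below assumes a hypothetical sensor gateway rather than any product IBM examined; it shows how passing an unchecked device name to a shell lets an attacker tack on arbitrary commands, and how building the command as an argument list avoids that.

```python
# Hypothetical sensor-gateway code, not taken from any system in IBM's white
# paper; it illustrates the general shape of a command-injection bug.
import subprocess

def ping_sensor_unsafe(device_name: str) -> int:
    # BAD: the device name is interpolated into a shell command, so input such
    # as "sensor01; rm -rf /" would run the attacker's command too.
    return subprocess.run(f"ping -c 1 {device_name}", shell=True).returncode

def ping_sensor_safe(device_name: str) -> int:
    # BETTER: validate the input and pass arguments as a list so the device
    # name is never parsed by a shell.
    if not device_name.isalnum():
        raise ValueError("unexpected characters in device name")
    return subprocess.run(["ping", "-c", "1", device_name]).returncode
```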

Atlanta is one example of a city attempting to improve its efficiency through smart city technology, with a focus on mobility, public safety, the environment, city operations efficiency, and public and business engagement.  Atlanta also knows all too well how crippling a hack can be: it suffered a ransomware attack this past spring that kept residents from using services such as paying their water bills or traffic tickets online.  The hacking threat to smart cities is real and significant.

If you have any questions or would like more information, please contact Ze’eva Kushner at [email protected].