Electronic Frontier Foundation

Description: Nonprofit defending digital privacy, free speech, and innovation.
Feed Data
  • EFF’s Top Recommendations for the Biden Administration, Fri, 22 Jan 2021 00:16:32 +0000:
    At noon on January 20, 2021, Joseph R. Biden, Jr. was sworn in as the 46th President of the United States, and he and his staff took over the business of running the country.

    The tradition of a peaceful transfer of power is as old as the United States itself. But by the time most of us see this transition on January 20th, it is mostly ceremonial. The real work of a transition begins months before, usually even before Election Day, when presidential candidates start thinking about key hires, policy goals, and legislative challenges. After the election, the [url=]Presidential Transition Act[/url] provides the president-elect’s team with government resources to lay the foundation for the new Administration’s early days in office. Long before the inauguration ceremony, the president-elect’s team also organizes meetings with community leaders, activists, and non-profits like EFF during this time, to hear about our priorities for the incoming Administration.

    In anticipation of these meetings, EFF prepared a [url=]transition memo[/url] for the incoming Biden administration, outlining our recommendations for how it should act to protect everyone’s civil liberties in a digital world. While we hope to work with the new Administration on a wide range of policies that affect digital rights in the coming years, this memo focuses on the areas that need the Administration’s immediate attention. In many cases, we ask that the Biden Administration change course from the previous Administration’s policies and practices.

    We look forward to working with the new Biden Administration and the new Congress to implement these important ideas and safeguards.

    Read EFF’s [url=]transition memo[/url].
  • Political Satire Is Protected Speech – Even If You Don’t Get the Joke, Wed, 20 Jan 2021 23:34:22 +0000:
    [i]This blog post was co-written by EFF Legal Fellow [url=]Houston Davidson[/url].[/i]

    Should an obviously fake Facebook post—one made as political satire—end with a lawsuit and a bill to pay for a police response to the post? Of course not, and that’s why EFF filed an [url=]amicus brief[/url] in [i]Lafayette City v. John Merrifield[/i].

    In this case, Merrifield made an obviously fake Facebook event satirizing right-wing hysteria about Antifa. The announcement specifically poked fun at the well-established genre of fake Antifa social media activity, used by some to drum up anti-Antifa sentiment. However, the mayor of Lafayette didn’t get the joke, and now Lafayette City officials want Merrifield to pay for the costs of policing the fake event.

    In EFF’s amicus brief filed in support of Merrifield in the Louisiana Court of Appeal, we trace the rise and proliferation of obviously parodic fake events. These events range from fake concerts (“[url=]Drake live at the Cheesecake Factory[/url]”) to quixotic events designed to forestall natural disasters (“[url=]Blow Your Saxophone at Hurricane Florence[/url]”) to fake destruction of local monuments (“Stone Mountain Implosion”). This kind of fake event is a form of online speech. It’s a crucial form of social commentary whether it makes people laugh, builds resilience in the face of absurdity, or criticizes the powerful.

    EFF makes a two-pronged legal argument. First, the First Amendment clearly protects the kind of satirical speech that Merrifield has made. Political satire and other parodic forms are an American tradition that goes back to the country’s founding. The amicus brief explains that “Facetious speech may be frivolously funny, sharply political, and everything in between, and it is all fully protected by the First Amendment, even when not everybody finds it humorous.” Even when the speech offends, it may still claim constitutional protection. While the State of Louisiana may not have seen the parody, the First Amendment still applies. Second, parodic speech, by its very definition, has no intent to cause the specific serious harm that the charge of incitement or similar criminal liabilities requires. After all, it was meant to make a point about online misinformation.

    We hope the appeals court follows the law, and rejects the state’s case.
  • Blyncsy’s Patent On Contact Tracing Isn’t A Medical Breakthrough, It’s A Patent Breakdown, Wed, 20 Jan 2021 21:21:11 +0000:
    Stupid Patent of the Month

    Contact tracing is critical for limiting the spread of a contagion like COVID-19, but that doesn’t mean it’s inventive to compare people’s locations using their smartphones. Rather, it’s all the more important to protect the basic methods of public health from bogus patent claims.

    The CDC [url=]recommends[/url] contact tracing for close contacts of any “confirmed or probable” COVID-19 patients. But doing the job has been left to state and local public health officials. 

    In addition to traditional contact tracing by public health workers, some are using “proximity apps” to track when people are near one another. These apps [url=]should be used only with careful respect for user privacy[/url]. And they can’t be a substitute for contact tracing done by public health workers. But, if used carefully, such apps could be a useful complement to traditional contact tracing.  

    Unless someone with an absurdly broad patent stops them.

    In April, while many states were still under the first set of shelter-in-place orders, a Utah company called Blyncsy was building a website to let government agencies know that it wanted to get paid—not for giving them any product, but simply for owning a patent they’d managed to get issued in 2019. In [url=]news[/url] [url=]reports[/url] and its own press release, Blyncsy said that anyone doing cellphone-based contact tracing would need to license its patent, U.S. Patent No. [url=]10,198,779[/url], “Tracking proximity relationships and uses thereof.” 

    By September, Blyncsy had begun [url=]demanding $1 per resident payments[/url] from states that released contact tracing apps, including Pennsylvania, North Dakota, South Dakota, and Virginia.  

    “State governments have taken it upon themselves to roll out a solution in their name in which they’re using our property without compensation,” Blyncsy CEO Mark Pittman told Wired. 

    The developer behind North Dakota and Wyoming’s contact tracing apps told Wired that Blyncsy’s threatened patent battle could “put a freeze on new states rolling out apps.” 

    Given Blyncsy’s $1 per person price point, a state like Virginia, with more than 8 million residents, would be vastly overpaying for contact tracing technology. That state’s app cost $229,000 to develop.  

    [b]Contact Tracing is Very Good And Very Old[/b]  

    Unfortunately, our patent system encourages the acquisition and use of bad patents like the one owned by Blyncsy. The company’s patent essentially claims technology that’s well over a century old, and could be performed with pencil and paper. Simply adding a smartphone into the works—a “mobile computing device,” as Blyncsy’s patent describes it—doesn’t make the patent valid.  

    And the patent’s primary claim doesn’t have any technology in it at all. It’s a pure “business method” patent, except that the only business it appears to support is demanding money from state public health departments. Simplifying the language of Claim 1, it describes: 

    Receiving data about the location of a first person, who “has a contagion” 
    Receiving data about the location of a second person
    Determining if they’re close together 
    Determining when they were close together
    Determining whether the “proximity relationship” is positive or negative 
    Labeling the second person as being either contaminated or not 
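    To underline how little is being claimed, the six steps above can be sketched in a few lines of Python. This is purely illustrative: every name here (the `LocationPing` record, the distance and time thresholds) is an assumption of this sketch, not anything from the patent or from any real contact-tracing app.

    ```python
    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class LocationPing:
        person_id: str
        x: float          # position, e.g. meters east of a reference point
        y: float          # position, meters north of the reference point
        timestamp: float  # seconds since some shared epoch
        has_contagion: bool = False

    def label_contact(first: LocationPing, second: LocationPing,
                      max_distance: float = 2.0, max_seconds: float = 900.0) -> bool:
        """The patent's claimed steps: receive two people's location data,
        check whether they were close together in space and in time, and
        label the second person as exposed ("contaminated") or not."""
        close_in_space = hypot(first.x - second.x, first.y - second.y) <= max_distance
        close_in_time = abs(first.timestamp - second.timestamp) <= max_seconds
        # A "positive proximity relationship" with a contagious person
        # means the second person gets labeled as a contact.
        return first.has_contagion and close_in_space and close_in_time

    # Example: two pings roughly 1.4 meters apart, five minutes apart in time.
    patient = LocationPing("A", 0.0, 0.0, 1000.0, has_contagion=True)
    contact = LocationPing("B", 1.0, 1.0, 1300.0)
    print(label_contact(patient, contact))  # True
    ```

    The same comparison could be done with a paper logbook and a wristwatch, which is the article’s point: putting a "mobile computing device" into the loop adds nothing inventive.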

    That’s just contact tracing, a process that has been done with simple writing and human interviews since at least the mid-19th century. In a famous breakthrough for the field of epidemiology, British doctor John Snow [url=]traced an 1854 cholera epidemic[/url] in London to a specific water pump on Broad Street.

    Patent documents have even more specific descriptions of Blyncsy’s claimed method. Last week, Unified Patents [url=]published some prior art for the Blyncsy patent[/url], including a 2003 patent application directed to an “electronic proximity apparatus” that alerts individuals when they are exposed to a contagious disease.

    Doing research, writing down the results, and trying to figure out how people got sick isn’t a patentable invention—it’s just basic science. The patent’s other claims are no more impressive. The issuance of the Blyncsy patent isn’t a success story, it’s evidence of glaring flaws in our patent system.  

    Too often, patents are issued that don’t embody any innovation. These simply become a 20-year-long invitation for a company to demand payment for the innovation and work of others. Sometimes that’s other companies, and sometimes it’s our own governments. In this case, Blyncsy is trying to make a buck by levying what amounts to a tax—but one that only benefits a single, for-profit company. 

    Unfortunately, we’ve seen patent holders repeatedly try to take advantage of the pandemic. Last year, patent troll Labrador Diagnostics tried to use [url=]old patents belonging to the notorious firm Theranos[/url] to sue a real firm making COVID-19 tests. Another patent troll [url=]sued a ventilator manufacturer[/url]. A health crisis is a time to carefully limit, not expand, the power of patents. 
  • Oakland’s Progressive Fight to Protect Residents from Government Surveillance, Wed, 20 Jan 2021 19:26:57 +0000:
    The City of Oakland, California, has once again raised the bar on community control of police surveillance. Last week, Oakland's City Council voted unanimously to strengthen the city's already [url=]groundbreaking[/url] Surveillance and Community Safety Ordinance. The latest [url=]amendment[/url], which immediately went into effect, adds prohibitions on Oakland's Police Department using [url=]predictive policing technology[/url]—which has been shown to amplify existing bias in policing—as well as a range of privacy-invasive biometric surveillance technologies, to the city’s existing ban on government use of face recognition.

    Oakland is now (as far as we know) the first city to ban government use of voice recognition technology. This is an important step, given the growing proliferation of this invasive form of surveillance, at [url=]home[/url] and [url=]abroad[/url]. Oakland also may be the first city to ban government use of any technology that can identify a person based on “physiological, biological, or behavioral characteristics ascertained from a distance.” This would include recognition of a person’s distinctive [url=]gait[/url] or manner of walking.

    Last June, the California city of Santa Cruz became the [url=]first U.S. city[/url] to ban its police department from using predictive policing technology. Just a few weeks ago, [url=]New Orleans[/url], Louisiana, also prohibited the technology, as well as certain forms of biometric surveillance. With last week's amendments, Oakland became the largest city to ban predictive policing.

    Oakland is also the first city to incorporate these prohibitions into a more comprehensive Community Control of Police Surveillance ([url=]CCOPS[/url]) framework. Its ordinance not only prohibits these particularly pernicious forms of surveillance, but also ensures that police cannot obtain any other forms of surveillance technology absent permission from the city council following input from residents. This guarantees transparency, accountability, and community engagement before any privacy-invasive surveillance technology can be adopted by local government agencies.

    Last week marks the second significant update to Oakland's CCOPS ordinance since it went into effect in 2018. One year later, the law was amended to ban government use of face recognition. At the time, Oakland was one of only three cities to ban the technology. Neighboring [url=]San Francisco[/url] was the first, having included its ban in its own CCOPS ordinance. Since then, over a dozen U.S. cities have followed suit. Now [url=]federal legislation[/url] seeks to bar federal agencies from using the technology. The bill is sponsored by Senators Edward Markey and Jeff Merkley and Representatives Ayanna Pressley, Pramila Jayapal, Rashida Tlaib, and Yvette Clarke.

    The recent amendments to Oakland’s surveillance ordinance were sponsored by the city’s [url=]Privacy Advisory Commission[/url], which is tasked with developing and advising on citywide privacy concerns. In a [url=]blog[/url] published by Secure Justice—a Bay Area non-profit led by the Commission's chair, Brian Hofer—Sameena Usman, of the San Francisco office of the Council on American-Islamic Relations ([url=]CAIR[/url]), explained: "Not only are these methods intrusive and don't work, they also have a disproportionate impact on Black and brown communities – leading to over-policing." EFF joined 16 other groups in [url=]supporting[/url] the new law.

    EFF applauds the Commission and Oakland's City Council for taking this groundbreaking step advancing public safety and protecting civil liberties. Predictive policing and biometric surveillance technologies have a harmfully disparate impact against people of color, immigrants, and other vulnerable populations, as well as a chilling effect on our fundamental First Amendment freedoms.  
  • Why EFF Doesn’t Support Bans On Private Use of Face Recognition, Wed, 20 Jan 2021 16:29:33 +0000:
    Government and private use of face recognition technology each present a wealth of concerns. Privacy, safety, and amplification of carceral bias are just some of the reasons why we must ban government use.

    But what about private use? It also can exacerbate injustice, including its use by police contractors, [url=]retail[/url] establishments, business improvement districts, and [url=]homeowners[/url].

    Still, EFF does not support banning private use of the technology. Some users may choose to deploy it to lock their mobile devices or to demonstrate the hazards of the tech in government hands. So instead of a prohibition on private use, we support strict laws to ensure that each of us is empowered to choose if and by whom our faceprints may be collected. This requires mandating informed opt-in consent and data minimization, enforceable through a robust private right of action.

    Illinois has had such [url=]a[/url] [url=]law[/url] for more than a decade. This approach properly balances the human right to control technology—both to use and develop it, and to be free from other people’s use of it.

    The menace of all face recognition technology

    Face recognition technology requires us to confront complicated questions in the context of centuries-long racism and oppression within and beyond the criminal system. 

    Eighteenth-century “[url=]lantern laws[/url]” requiring Black and indigenous people to carry lanterns to illuminate themselves at night are one of the earliest examples of a useful technology twisted into a force multiplier of oppression. Today, face recognition technology has the power of covert bulk collection of biometric data that was inconceivable less than a generation ago.

    Unlike our driver’s licenses, credit card numbers, or even our names, we cannot easily change our faces. So once our biometric data is captured and stored as a face template, it is largely indelible. Furthermore, as a CBP vendor found out when the face images of approximately [url=]184,000[/url] travelers were stolen, databases of biometric information are ripe targets for data thieves.

    Face recognition technology chills fundamental freedoms. As early as 2015, the Baltimore police used it to target [url=]protesters against police violence[/url]. Its [url=]threat to essential liberties[/url] extends far beyond political rallies. Images captured outside houses of worship, medical facilities, community centers, or our homes can be used to infer our familial, political, religious, and sexual relationships. 

    Police have unparalleled discretion to use violence and intrude on liberties. From our nation’s inception, essential freedoms have not been equally available to all, and the discretion vested in law enforcement only accentuates these disparities. One need look no further than the [url=]Edmund Pettus Bridge[/url] or the [url=]Church Committee[/url]. In 2020, as some people took to the streets to protest police violence against Black people, and others came out to protest COVID-19 mask mandates, police enforced their authority in [url=]starkly[/url] [url=]different[/url] manners. 

    Face recognition amplifies these police powers and aggravates these racial disparities.

    Each step of the way, the private sector has contributed. Microsoft, Amazon, and IBM are among the many vendors that have built face recognition for police (though in response to [url=]popular pressure[/url] they have temporarily ceased doing so). Clearview AI continues to process [url=]faceprints[/url] of billions of people without their consent in order to help police identify suspects. Such companies ignore the human right to make informed choices over the collection and use of biometric data, and join law enforcement in exacerbating long-established inequities.

    The fight to end government use of face recognition technology 

    In the hands of government, face recognition technology is simply too threatening. The legislative solution is clear: ban government use of this technology.

    EFF has fought to do so, alongside our members, [url=]Electronic Frontier Alliance allies[/url], other national groups like the ACLU, and a broad range of concerned community-based groups. We supported the effort to pass San Francisco’s [url=]historic 2018 ordinance[/url]. We’ve also advocated before city councils in California ([url=]Berkeley[/url] and [url=]Oakland[/url]), Massachusetts ([url=]Boston[/url], [url=]Brookline[/url], and [url=]Somerville[/url]), and Oregon ([url=]Portland[/url]).

    Likewise, we helped pass California’s [url=]AB 1215[/url]. It establishes a three-year moratorium on the use of face recognition technology with body worn police cameras. AB 1215 is the nation’s first state-level face recognition prohibition. We also joined with the ACLU and others in demanding that manufacturers [url=]stop selling the technology to law enforcement[/url]. 

    In 2019, we launched our [url=]About Face[/url] campaign. The campaign supplies [url=]model language[/url] for an effective ban on government use of face recognition, tools to prepare for meeting with local officials, and a library of existing ordinances. Members of EFF’s organizing team also work with community-based members of the Electronic Frontier Alliance to build local coalitions, deliver public comment at City Council hearings, and build awareness in their respective communities. 

    The promise of emerging technologies

    EFF works to ensure that technology supports [url=]freedom, justice, and innovation[/url] for all the people of the world. In our experience, when government tries to take technology out of the hands of the public, it is often serving the interests of the strong against the weak. Thus, while it is clear that we must ban government use of face recognition technology, we support a different approach to private use.

    As [url=]Malkia Cyril[/url] wrote in a 2019 piece for McSweeney’s [url=][i]The End of Trust[/i][/url] titled “[url=]Watching the Black Body[/url]”: “Our twenty-first-century digital environment offers Black communities a constant pendulum swing between promise and peril.” High-tech profiling, policing, and punishment exacerbate the violence of America’s race-based caste system. Still, other technologies have facilitated the development of intersectional and [url=]Black-led[/url] movements for change, including encrypted communication, greater access to video recording, and the ability to [url=]circumvent the gatekeepers[/url] of traditional media.

    [url=]Encryption[/url] and [url=]Tor[/url] secure our private communications from snooping by stalkers and foreign nations. Our [url=]camera[/url] [url=]phones[/url] help us to document police misconduct against our neighbors. [url=]The Internet[/url] allows everyone to (in the words of the U.S. Supreme Court) “become a town crier with a voice that resonates farther than it could from any soapbox.” [url=]Digital currency[/url] holds the promise of greater financial privacy and independence from financial censorship.

    EFF has long resisted government efforts to ban or hobble the development of such novel technologies. We stand with [url=]police observers[/url] against wrongful arrest for recording on-duty officers, with [url=]sex workers[/url] against laws that stop them from using the Internet to share safety information with peers, and with [url=]cryptographers[/url] against FBI attempts to weaken encryption. Emerging technologies are for everyone.

    Private sector uses of face recognition

    There are many menacing ways for private entities to use face recognition. [url=]Clearview AI[/url], for example, uses it to help police departments identify and arrest protesters.

    It does not follow that all private use of face recognition technology undermines human rights. For example, many people use face recognition [url=]to[/url] [url=]lock[/url] their smartphones. While passwords provide [url=]stronger[/url] [url=]protection[/url], without this option some users might not secure their devices at all.

    Researchers and anti-surveillance scholars have made good use of the technology. 2018 saw [url=]groundbreaking work[/url] from then-MIT researcher Joy Buolamwini and Dr. Timnit Gebru, exposing face recognition’s algorithmic bias. That same year, the ACLU used Amazon’s Rekognition system to demonstrate the technology’s disparate inefficacy: it disproportionately misidentified congressional representatives of color as individuals in publicly accessible arrest photos. U.S. Sen. Edward Markey (D-MA), one of the lawmakers erroneously matched despite being part of the demographic the technology identifies most accurately, went on to introduce the [url=]Facial Recognition and Biometric Technology Moratorium Act of 2020[/url]. 

    These salutary efforts would be illegal in a state or city that banned private use of face recognition. Even a site-specific ban—for example, just in public forums like parks, or public accommodations like cafes—would limit anti-surveillance activism in those sites. For example, it would bar a live demonstration at a rally or lecture of how face recognition works on a volunteer.

    Strict limits on private use of face recognition

    While EFF does not support a ban on private use of face recognition, we do support strict limits. Specifically, laws should subject private parties to the following rules:

    Do not collect a faceprint from a person without their prior written opt-in consent.
    Likewise, do not disclose a person’s faceprint without their consent.
    Do not retain a faceprint after the purpose of collection is satisfied, or after a fixed period of time, whichever is sooner.
    Do not sell faceprints, period.
    Securely store faceprints to prevent their theft or misuse.

    All privacy laws need effective enforcement. Most importantly, a person who discovers that their privacy rights are being violated must have a “[url=]private right of action[/url]” so they can sue the rule-breaker.

    In 2008, Illinois enacted the first law of this kind: the Illinois Biometric Information Privacy Act ([url=]BIPA[/url]). A new [url=]federal[/url] [url=]bill[/url] sponsored by U.S. Senators Jeff Merkley and Bernie Sanders seeks to implement BIPA on a nationwide basis.

    We’d like this kind of law to contain two additional limits. First, private entities should be prohibited from collecting, using, or disclosing a person’s faceprint except as necessary to give them something they asked for. This privacy principle is [url=]often[/url] [url=]called[/url] “[url=]minimization[/url].” Second, if a person refuses to consent to faceprinting, a private entity should be prohibited from retaliating against them by, for example, charging a higher price or making them stand in a longer line. EFF opposes “[url=]pay[/url] [url=]for[/url] [url=]privacy[/url]” schemes that pressure everyone to surrender their privacy rights, and contribute to an income-based society of privacy “haves” and “have nots.” Such laws should also have an exemption for [url=]newsworthy[/url] information.

    EFF has long worked to enact more BIPA-type laws, including in [url=]Congress[/url] and [url=]Montana[/url]. We regularly advocate in Illinois to [url=]protect[/url] [url=]BIPA[/url] [url=]from[/url] [url=]legislative[/url] [url=]backsliding[/url]. We have filed amicus briefs in a [url=]federal appellate court[/url] and the [url=]Illinois Supreme Court[/url] to ensure strong enforcement of BIPA, and in an [url=]Illinois trial court[/url] against an ill-conceived challenge to the law’s constitutionality.

    Illinois’ BIPA enjoys robust enforcement. For example, Facebook agreed to pay its users [url=]$650 million[/url] to settle a case challenging non-consensual faceprinting under the company’s “tag suggestions” feature. Also, the ACLU recently sued [url=]Clearview AI[/url] for its non-consensual faceprinting of billions of people. Illinoisans have filed dozens of other BIPA suits.

    Government entanglement in private use of face recognition

    EFF supports one rule for government use of face surveillance (ban it) and another for private use (strictly limit it). What happens when government use and private use overlap?

    Some government agencies purchase faceprints or faceprinting services from private entities. Government should be banned from doing so. Indeed, most bans on government use of face recognition [url=]properly[/url] [url=]include[/url] [url=]a[/url] [url=]ban[/url] on government obtaining or using information derived from a face surveillance system.

    Some government agencies own public forums, like parks and sidewalks, that private entities use for their own expressive activities, like protests and festivals. If community members wish to incorporate consensual demonstrations of facial recognition into their expressive activity, subject to the strict limits above, they should not be deprived of the right to do so in the public square. Thus, when a draft of Boston’s [url=]recently enacted[/url] ban on government use of face recognition also prohibited issuance of permits to third parties to use the technology, EFF [url=]requested[/url] and [url=]received[/url] language limiting this to when third parties are acting on behalf of the city.

    Some government agencies own large restricted spaces, like airports, and grant other entities long-term permission to operate there. EFF opposes [url=]government[/url] [url=]use[/url] [url=]of[/url] [url=]face[/url] [url=]recognition[/url] [url=]in[/url] [url=]airports[/url]. This includes by the airport authority itself; by other government agencies operating inside the airport, including federal, state, and local law enforcement; and by private security contractors. This also includes a ban on government access to information derived from private use of face recognition. On the other hand, EFF does not support a ban on private entities making their own use of face recognition inside airports, subject to the strict limits above. This includes use by travelers and airport retailers. Some anti-surveillance advocates have sought [url=]more expansive bans[/url] on private use.

    Private use of face recognition in places of public accommodation

    [url=]A[/url] [url=]growing[/url] [url=]number[/url] [url=]of[/url] [url=]retail[/url] [url=]stores[/url] and other places of public accommodation are deploying face recognition to identify all people entering the establishment. EFF strongly opposes this practice. When used for theft prevention, it has a racially disparate impact: the photo libraries of supposedly “suspicious” people are often based upon unfair encounters between people of color and law enforcement or corporate security. When used to micro-target sales to customers, this practice intrudes on privacy: the photo libraries of consumers, correlated to their supposed preferences, are based on corporate surveillance of online and offline activity.

    The best way to solve this problem is with the strict limits above. For example, the requirement of written opt-in consent would bar a store from conducting face recognition on everyone who walks in.

    On the other hand, EFF does not support bans on private use of face recognition within public accommodations. The anti-surveillance renter of an assembly hall, or the anti-surveillance owner of a café, might wish to conduct a public demonstration of how face recognition works, to persuade an audience to support legislation that bans government use of face recognition. Thus, while EFF [url=]supported[/url] the recently enacted ban on [url=]government use[/url] of face recognition in Portland, Oregon, we did not support that city’s simultaneously enacted ban on [url=]private use[/url] of face recognition within public accommodations.

    U.S. businesses selling face recognition to authoritarian regimes 

    Corporations based in the United States have a [url=]long[/url] [url=]history[/url] of selling key surveillance technology, used to aid in oppression, to authoritarian governments. This has harmed journalists, human rights defenders, ordinary citizens, and democracy advocates. The Chinese tech corporation Huawei reportedly is building face recognition software to notify government authorities when a camera identifies a [url=]Uyghur[/url] person. There is risk that, without effective accountability measures, U.S. corporations will sell face recognition technology to authoritarian governments.

    In collaboration with leading organizations tracking the sale of surveillance technology, we recently filed a [url=]brief[/url] asking the U.S. [url=]Supreme Court[/url] to uphold the ability of non-U.S. residents to sue U.S. corporations that aid and abet human rights abuses abroad. The case concerns the [url=]Alien Tort Statute[/url], which allows foreign nationals to hold U.S.-based corporations accountable for their complicity in human rights violations.

    To prevent such harms before they occur, businesses selling surveillance technologies to governments should implement a “[url=]Know Your Customer[/url]” framework of affirmative investigation before any sale is executed or service provided. Significant portions of this framework are already required through the [url=]Foreign Corrupt Practices Act[/url] and [url=]export regulations[/url]. Given the inherent risks associated with face recognition, U.S. companies should be especially wary of selling this technology to foreign states.

    Fight back

    EFF will continue to advocate for bans on [url=]government[/url] [url=]use[/url] of face recognition, and adoption of [url=]legislation[/url] that protects each of us from [url=]non-consensual collection[/url] of our biometric information by private entities. You can join us in this fight by working with others [url=]in your area[/url] to advocate for stronger protections and signing our About Face [url=]petition[/url] to end government use of face recognition technology.
Submission Date Oct 07, 2018 (Edited Nov 23, 2018)
