Facebook has become so deeply ingrained in people’s lives that it has now become the norm to give it access to personal data without much thought, as if this is but a small price to pay for Facebook’s “free” service. But nothing could be further from the truth.
These traceable and sellable data now give Facebook the power to manipulate what we do, how we feel, what we buy and what we believe. The consequences of giving Facebook this much power are only now becoming apparent, as lawsuits mount over its security breaches and lousy privacy settings.
Even CrossFit, the well-established branded fitness regimen, has decided to stop supporting Facebook and its associated services, halting all of its activity on Facebook and Instagram as of May 22, 2019. The decision came in the wake of Facebook’s deletion of the Banting7DayMealPlan user group, which was done without warning or explanation. The group has more than 1.65 million members who post testimonials about the efficacy of a low-carb, high-fat diet.
Although the group was later reinstated, Facebook’s action still shows how it acts in the interest of the food and beverage industry. You see, big advertisers on Facebook, like Coca-Cola, don’t want you to have access to this information, and Facebook is more than happy to ban anyone challenging the industrial food system. By doing this, it potentially contributes to the global chronic disease crisis.
Would you continue trusting a company that thinks too little of violating your rights to privacy?
1. Facebook’s Primary ‘Product’ Is You
If you think Facebook’s product is the very platform that users interact with, you’re wrong. You are actually Facebook’s primary product. The site makes money off you by meticulously tracking your hobbies, habits and preferences through your “likes,” posts, comments, private messages, friends list, login locations and more. It sells these data, along with your personal information, to whoever wants access, potentially facilitating everything from targeted advertising to targeted fraud — this is its entire profit model.
Did you know that it can even access your computer or smartphone’s microphone without your knowledge? So if you’re suddenly receiving ads for products or services that you just spoke out loud about, don’t be surprised — chances are one or more apps linked to your microphone have been eavesdropping on you. These privacy intrusions can continue even after you’ve closed your Facebook account.
Companies can also collect information about the websites you’re visiting or the keywords you’re searching for outside of Facebook’s platform without your permission, and then sell these data to Facebook so it knows which ads to show you. This makes Facebook the most infamous advertising tool ever created, and to increase revenue, it has to continue spying on you.
During Facebook’s early days, its founder, Mark Zuckerberg, promised in an interview that no user information would be sold or shared with anyone a user had not specifically given permission to. The site’s blatant disregard for its users’ privacy proves otherwise: Facebook has repeatedly been caught mishandling user data and lying about its data harvesting, resulting in multiple legal problems.
Facebook’s origins are also far from altruistic, even though the site is said to have been created “to make the world more open and connected” and to “give people the power to build community.” A forerunner of Facebook was a site called FaceMash, which was created to rate photos of women — photos that were obtained and used without permission. Some of the women were even compared to farm animals. This speaks volumes about Zuckerberg’s disrespect for privacy: Facebook was essentially founded on a misogynistic hate site, and by its own standards it should ban itself.
2. Facebook Faces Investigation for Its Lax Security and Privacy Practices
Facebook is currently facing a number of lawsuits over its controversial data-sharing practices and poor security measures. Back in 2010, the U.S. Federal Trade Commission (FTC) revealed that Facebook was sharing user data with third-party software developers without the users’ consent, and expressed concern about the potential misuse of personal information, since Facebook did not track how third parties used it.
While Facebook agreed by consent order to “identify risk to personal privacy” and eliminate those risks, it never meaningfully addressed its security lapses. Had it done so, it might have prevented the Cambridge Analytica scandal, now the focus of a federal criminal probe. That scandal involved Facebook’s deal with a British political consulting firm, which gained access to the data of around 87 million users — data that was then used to influence public opinion during the 2016 U.S. presidential election.
Another criminal investigation into Facebook’s data-sharing practices is underway. This time, it revolves around Facebook’s partnerships with tech companies and device makers, which let them override users’ privacy settings and gave them broad access to user information.
Amid these federal criminal investigations, Zuckerberg announced the company’s latest plan to encrypt messages, so that supposedly only the sender and the receiver will be able to decipher them. This is ironic, considering it was recently discovered that Facebook stored millions of user passwords in readable plaintext on an internal platform, potentially compromising the security of millions of its users.
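For contrast, the long-standard alternative to storing readable passwords is a salted, slow hash: the server keeps only material that can verify a password without ever revealing it. Here is a minimal sketch using Python’s standard library (the iteration count and function names are illustrative, not any company’s actual scheme):

```python
import hashlib
import hmac
import secrets

ITERATIONS = 200_000  # illustrative work factor; real deployments tune this


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Stored this way, a database breach leaks only salts and digests that an attacker must grind through one guess at a time, rather than a ready-to-use list of credentials.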
Zuckerberg has repeatedly demonstrated a complete lack of integrity when it comes to fulfilling his promises of privacy. In fact, in a 2010 talk at the Crunchies awards, he stated that “privacy is no longer a social norm,” implying that using social media automatically strips you of the right to privacy, and that this is why the company does not respect it.
3. Facebook Is a Monopoly
Facebook’s plan to integrate Instagram, Messenger and WhatsApp would turn it into a global super-monopoly. This merger has been criticized by tech experts, as it robs users of their ability to choose between messaging services, leaving them virtually no choice but to submit to Facebook’s invasive privacy settings. This also gives Facebook unprecedented data mining capabilities.
The German antitrust regulator, the Bundeskartellamt, was the first to move against Facebook’s unrestricted data mining, ruling that Facebook may not combine user data from its different platforms and outside sources without explicit consent. If other countries follow suit, the merger could fall through, as it probably should.
Among the most outspoken proponents of breaking up monopolies like Facebook, Google and Amazon is U.S. presidential candidate Sen. Elizabeth Warren, D-Mass. Facebook censored her campaign to break it up, taking down three of her ads with a message saying the ads went “against Facebook’s advertising policies.”
After Warren took to Twitter to point out that the censorship simply proved why her proposal was necessary, Facebook reinstated her ads with the lame excuse that they had been removed only because they included Facebook’s logo, which violates the site’s advertising policy.
I’ve Decided — I Am Leaving Facebook
At present, I have nearly 1.8 million Facebook followers, and I am grateful for the support. But a while back, I expressed my concern that I might be doing more harm than good by being part of Facebook, since I could be contributing to its invasive data mining, an idea that never sat well with me.
For those reasons, I decided that leaving the platform and going back to depending on email is the responsible way forward. If you haven’t subscribed to my newsletter yet, I urge you, your family and your friends to sign up now. I polled my audience and they agreed with my decision to leave.
Kids born in 2019 will be the most tracked humans in history. It’s predicted that by the time they turn eighteen, 70,000 posts about them will be in the internet ether. How and what you post about your child is a personal choice, but trusting that tech companies aren’t building dossiers on our children, starting with that first birth announcement, is a modern-day digital civil right we need to demand. As a mother myself, I want my children’s privacy to be a priority for tech makers.
I used to feel pretty lonely in that endeavor but over the last 12 months, I’ve noticed a trend: more and more people are talking about privacy. They’re calling out the companies that don’t take people’s online privacy seriously enough. They’re sharing articles detailing cover-ups and breaches. They’ve told me they want more privacy online and yet, feel trapped by the Terms of Service of the big platforms they need to use.
I think of this frustration as ‘digital wokeness’. And it’s the one good thing that came out of the Cambridge Analytica scandal. Though we’ve heard the reporting numerous times, let’s recall that from one personality quiz taken by 270,000 people, 87 million Facebook accounts were accessed. Tens of millions of people (maybe you) did not knowingly give permission for their information to be shared or manipulated by political operatives with questionable ethics.
We still don’t know exactly how this data collection and the subsequent microtargeting of political content influenced our democratic process. But Cambridge Analytica is just one example. Every day we hear about another undisclosed data breach: private information collected, sometimes sold, and sometimes simply given away without our knowledge or consent. CEOs sit before Congress saying they will “do better” while stories continue to break about negligence and wrongdoing.
Just what exactly is happening?
Breaches are just a symptom of the problem. The fundamentals of the relationship between customers and these companies are broken. I recently took the helm of the podcast IRL: Online Life is Real Life and spoke to Shoshana Zuboff, author of The Age of Surveillance Capitalism, who explained how most tech companies have built their businesses on the data they collect by tracking their users’ behavior. “We all need to better grasp what the trade-offs really are, because once you learn how to modify human behavior at scale, we’re talking about a kind of power now invested in these private companies,” she told me. I know. The situation is messed up, and it makes you want to put your head in the sand and give up on digital privacy.
Please don’t do that. Fixing our online privacy problem requires both individual and collective action. Support organizations pressuring Congress and Silicon Valley to begin to claw back our digital civil rights and take some simple steps right now to protect your families and send a message to tech companies.
Yes, doing these things is annoying and tedious but it does matter:
Be more choosy about your technology. There’s no need to go “off the grid,” but choosing products and companies that respect you and your data – like the Firefox browser and DuckDuckGo search engine – sends an important message to big companies that largely prioritize their shareholders over their customers. These smaller, user-focused apps and services have put ethics at the heart of their businesses and deserve to be downloaded.
Become a privacy settings ninja. Most sites and apps have privacy settings you can access, but they tuck them away several tabs deep. In a user-centric world, the default settings would take your privacy preferences into account and make them easier to update. Right now, as you’ve likely experienced, finding and adjusting your privacy settings is just hard enough that most of us give up or get distracted midway through trying to figure out what to click where. Gird yourself and press on! Try a data detox and reset your privacy options, step-by-step.
Educate yourself on how your data is accessed. Easier said than done, I know. That’s why I created a five-part bootcamp. The Privacy Paradox Challenge (from my Note to Self days) is a week of mini-podcasts and personal challenges that can help you get insight into how vast the issue is and how to get your privacy game on point.
On a recent episode of IRL, I spoke to Ellen Silver, VP of Operations at Facebook regarding the ever louder conversation about Facebook’s ethics. She assured me that Facebook is working to be more transparent. A few weeks later her boss, Mark Zuckerberg, made his 2019 New Year’s Resolution to “host a series of public discussions about the future of technology in society.” But we’ve heard promises from Facebook and other tech companies before. Let’s make sure they talk about privacy. Let’s continue asking all of the tech companies harder questions. And let’s start using our spending power to support companies that take our data as seriously as we do. Those are the next steps in this growing conversation about privacy. And that is indeed progress.
Manoush Zomorodi is co-founder of Stable Genius Productions, a media company with a mission to help people navigate personal and global change. In addition to hosting Firefox’s IRL podcast, Manoush hosts Zig Zag, a podcast about changing the course of capitalism, journalism, and women’s lives. Investigating how technology is transforming humanity is Manoush’s passion and expertise. In 2017, she wrote a book, “Bored and Brilliant: How Spacing Out Can Unlock Your Most Creative Self” and gave a TED Talk about surviving information overload and the “Attention Economy.” She was named one of Fast Company’s 100 Most Creative People in Business in 2018.
There is a growing consciousness about the desire to keep one’s messages private. Some are concerned about hackers, or worry about foreign or domestic government surveillance, but most people just agree with the general principle that what you say in your chat conversations ought to stay between you and the people you chat with.
It’s not a pleasant thought that your messages could be archived in perpetuity on a large company’s server or analyzed by some algorithm. The quest for privacy has birthed a whole generation of apps that promise to give you exactly that. Services like Telegram and Signal have turned the phrase “end-to-end encryption” into a topic of mainstream discussion. We’re here to help you figure out what this is all about and which apps to try.
A little background on encryption
Before we look at some specific apps, here’s a very brief explainer. Essentially, end-to-end encryption means that only the sender and the recipient can read a message. The message is encrypted on your phone, sent to the recipient, and only then decrypted. This keeps prying eyes, from telecom providers and government agencies to the company that hosts the service itself, from reading your messages. It also means the service couldn’t hand over your messages even if it were subpoenaed by a government agency, and a hacker who broke into the messaging service’s servers couldn’t get at your conversations.
The desire for end-to-end (E2E) encryption isn’t just about dodging NSA surveillance. In practice, it reflects a basic sense that messages should be private. With that in mind, be aware that just because something carries the word “encrypted” doesn’t mean it is end-to-end encrypted. Some services encrypt messages only between the endpoints of transmission: your conversations are stored encrypted on the messaging service’s servers, but since the service encrypted them, it can also decrypt them.
The services we’re looking at here all feature end-to-end encryption.
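The difference is easy to see in miniature. In the toy sketch below (a one-time-pad-style XOR, purely for illustration and nowhere near a real protocol like Signal’s), the key exists only on the two endpoints, so the relay in the middle stores ciphertext it cannot read:

```python
import secrets


def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with a same-length slice of key; applying it twice round-trips."""
    return bytes(k ^ b for k, b in zip(key, data))


# In a real app the two phones would derive this via a key exchange
# (e.g. Diffie-Hellman); the server never sees it.
shared_key = secrets.token_bytes(64)

message = b"meet at noon"
ciphertext = xor_cipher(shared_key, message)   # all the server ever relays or stores
recovered = xor_cipher(shared_key, ciphertext)  # only an endpoint holding the key can do this

assert recovered == message
```

Encryption-at-rest flips this picture: the server holds the key too, so it can run that last decryption step itself.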
One of the most popular apps in this space is Telegram. It’s been a pretty hot app for a couple of years, which is like 20 years in app time.
The most painstaking part is that you need to invite all of your contacts into your new, secret chat world through the app’s navigation menu. That’s the biggest problem with over-the-top messaging services: they don’t have the ubiquity of SMS.
Once you’ve done this, you can message people individually or create group channels for talking with an unlimited number of other users. The upside here is you can escape the limitations of MMS messaging that usually caps you at a particular number of people. Your group can even be public, giving you a mini social network without all the trolls that plague the likes of Facebook and Twitter.
The interface is a little barren, but Telegram makes the list for its robust privacy and offering native apps for iOS, Mac, Windows, the web, and of course Android.
Signal’s claim to fame is that it’s the preferred messaging application of Edward Snowden. It’s among the easiest to set up, as it automatically authenticates your number and can even be used as your default SMS app.
As with Telegram, you can create a group for private banter with an unlimited number of other users. Signal also makes phone calls, which I found to be very clear when testing them in a couple of different settings.
Signal isn’t optimized for tablets, but the company says that’s on the product roadmap. The design is no-frills, with color variation for different contacts to help keep you from sending a message to the wrong person.
Another good option is Wire. It offers some fun messaging tricks, like the ability to doodle, share your location, send images, or record a video. The app also includes a chat bot, Anna, which offers somewhat useful answers to various questions about how to use the app.
You can optionally create an account with your phone number, which makes setup and account deletion easy. Wire is great for one-on-one chats if you would prefer conversations with someone be off the record. But it doesn’t have the same type of social or group features found with some of the other offerings here.
You also can’t forget about the uber-popular WhatsApp. Like the others on this list, it promises end-to-end encryption so your messages stay private. The biggest advantage is that the service, which is owned by Facebook, has over a billion users. There’s a very good chance you won’t have to convince all your friends and family to download the app.
That shouldn’t be discounted, as one of the pains of moving to a new messaging service is convincing everybody to jump aboard. Still, Facebook’s ownership could make some wary, especially since the social network announced it would be using some account information from WhatsApp, including phone numbers. If your goal is a high threshold of privacy, it’s worth keeping an eye on.
If you want to see messages disappear before your eyes, then Dust (formerly Cyber Dust) is the way to go. The brainchild of Dallas Mavericks owner Mark Cuban, Dust makes messages disappear after 24 hours or as soon as they’re read, depending on your preferences.
The company spells out its encryption policy, and includes a couple other features to ease your mind like chats that don’t show usernames, so even if someone took a screenshot it couldn’t necessarily be attributed to you.
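The core mechanic behind disappearing messages is simple to sketch. The class below (the name and expiry policy are hypothetical, not Dust’s actual implementation) burns a message on read or after a time-to-live, whichever comes first; the clock is injectable so the behavior can be exercised without waiting:

```python
import time
from dataclasses import dataclass, field


@dataclass
class EphemeralInbox:
    """Holds messages that vanish on read or after ttl_seconds."""
    ttl_seconds: float
    clock: callable = time.monotonic
    _messages: list = field(default_factory=list)  # (timestamp, text) pairs

    def send(self, text: str) -> None:
        self._messages.append((self.clock(), text))

    def read(self) -> list[str]:
        """Return unexpired messages, then delete everything (burn on read)."""
        now = self.clock()
        alive = [text for ts, text in self._messages if now - ts < self.ttl_seconds]
        self._messages.clear()
        return alive


# Demo with a fake clock so time can be advanced instantly:
now = [0.0]
inbox = EphemeralInbox(ttl_seconds=10.0, clock=lambda: now[0])
inbox.send("this message will self-destruct")
now[0] = 5.0
assert inbox.read() == ["this message will self-destruct"]
assert inbox.read() == []  # already burned on read
inbox.send("too slow")
now[0] = 30.0
assert inbox.read() == []  # expired before being read
```

Real apps add the hard parts on top of this: deleting the ciphertext from every device and server copy, not just one in-memory list.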
The best app for you will depend on your needs. Secure messaging is a huge and growing area of consumer interest, and while switching takes some effort, it’s worth it if staying secure is what you’re after.
If ever there was a red flag story about Amazon’s Alexa then this is it.
If you watch the “Alexa for Medical Care Advice” video posted below, you will hear Alexa asking Peggy to “tell me about the symptoms or problems that are troubling you the most.”
Divulging your health issues to a private corporation is extremely troubling as you will see.
Let’s start with the obvious concerns and talk about something you will not see in the video: Peggy telling Alexa that her health concerns are none of Amazon’s business, and that Alexa should stop listening to everything she says.
But many Americans have no issue with Alexa listening to their everyday conversations and no problem asking Alexa health questions, because ‘they have nothing to hide’ — and therein lies the problem.
I challenge anyone to walk up to a stranger while recording the conversation and ask them about their health issues and see what happens. And if you really want to see what happens ask them about their kids’ health issues, etc. Would anyone like to guess what their response will be?
So if a stranger refuses to discuss their personal health issues with someone they do not know, why on earth would they trust Amazon?
Earlier this month, Amazon officially introduced “Alexa Healthcare Skills” which transmits and receives personal healthcare information.
But Alexa Healthcare does much more than just transmit and receive healthcare information.
Alexa can now call pharmacies, monitor children’s surgical recovery and track your blood sugar.
- Express Scripts (a leading Pharmacy Services Organization): Members can check the status of a home delivery prescription and can request Alexa notifications when their prescription orders are shipped.
- Cigna Health Today (by Cigna, the global health service company): Eligible employees with one of Cigna’s large national accounts can now manage their health improvement goals and increase opportunities for earning personalized wellness incentives.
- My Children’s Enhanced Recovery After Surgery (ERAS) (by Boston Children’s Hospital, a leading children’s hospital): Parents and caregivers of children in the ERAS program at Boston Children’s Hospital can provide their care teams updates on recovery progress and receive information regarding their post-op appointments.
- Swedish Health Connect (by Providence St. Joseph Health, a healthcare system with 51 hospitals across 7 states and 829 clinics): Customers can find an urgent care center near them and schedule a same-day appointment.
- Atrium Health (a healthcare system with more than 40 hospitals and 900 care locations throughout North and South Carolina and Georgia): Customers in North and South Carolina can find an urgent care location near them and schedule a same-day appointment.
- Livongo (a leading consumer digital health company that creates new and different experiences for people with chronic conditions): Members can query their last blood sugar reading, blood sugar measurement trends, and receive insights and Health Nudges that are personalized to them.
A few reasons to be concerned about Amazon Healthcare:
1.) Amazon is a for-profit corporation that makes its money by putting listening devices inside people’s homes.
Bloomberg revealed that a global team of Amazon workers is listening to people’s conversations.
Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices.
An article at Medium warns: Amazon listens to everything.
Imagine your horror as you open the attachments and begin listening to the recordings: A discussion of what to have for dinner, two children arguing over a toy, a woman talking to her partner as she gets into the shower.
2.) Besides the obvious privacy concerns of putting Alexa in your home, Alexa can be easily hacked and turned into an eavesdropping device.
When the attack [succeeds], we can control Amazon Echo for eavesdropping and send the voice data through network to the attacker.
3.) Amazon’s Healthcare partners act as though listening to people’s conversations is an act of benevolence.
“We believe voice technology, like Alexa, can make it easy for people to stay on the right path by tracking the status of their mail order prescription,” said Mark Bini, Vice President of Innovation and Member Experience, Express Scripts.
Mark Bini got one thing right: helping “people stay on the right path” will mean an increase in corporate profits as they data mine everything said by you and your family.
Cigna’s claim that divulging your personal health issues to Alexa allows customers to receive “personalized wellness incentives for meeting their health goals” is just another way of saying corporate spying.
“Personalized wellness incentives” is corporate jargon for sending you advertising or increasing a person’s health insurance premiums if they do not meet their health goals.
Amazon did not become the most valuable company in the world by helping people. The only reason why Amazon and its partners care about your healthcare is so they can profit from it.
By Aaron Kesel
More and more Airbnb hosts are hiding security cameras in rooms, and the company doesn’t seem worried about the practice as long as hosts disclose the cameras and they aren’t in bathrooms or bedrooms, according to a report by Fast Company.
“If you find a truly hidden camera in your bedroom or bathroom, AirBnB will support you. If you find an undisclosed camera in the private living room, AirBnB will not support you,” Jeffrey Bigham, a computer science professor at Carnegie Mellon University told Fast Company.
Bigham blogged about his recent experience at an Airbnb where he found cameras in his “private living room,” writing in a blog post, “A Camera is Watching You in Your AirBnB: And, you consented to it.”
“I just assume that there will be camera constantly recording when I stay in airbnb, or anywhere really. They way I never have to worry about whether it exist or not. As recording technology becoming more and more advance, it’s less and less reasonable to expect privacy. I rather adapt my life to fit this new culture,” Bigham writes.
Airbnb initially argued that because one camera was visible in the pictures advertising the rooms, the host had disclosed the security cameras.
Airbnb has since apologized and has given Bigham a refund, according to CNET. A spokesperson provided the publication with the following statement:
Our community’s privacy and safety is our priority, and our original handling of this incident did not meet the high standards we set for ourselves. We have apologized to Mr. Bigham and fully refunded him for his stay. We require hosts to clearly disclose any security cameras in writing on their listings and we have strict standards governing surveillance devices in listings. This host has been removed from our community.
However, Bigham is far from the only Airbnb customer to find cameras in a room they rented; and while Bigham found his in a “private living quarters,” others have found them in more private places like the bathroom and bedrooms.
Another case happened last September in Toronto, Canada, where a couple — Dougie Hamilton and his girlfriend — rented an Airbnb flat and discovered hidden cameras in their bedroom, News.com.au reported.
Hamilton told the Daily Record:
We were only in the place for 20 minutes when I noticed the clock. We’d had a busy day around the city and finally were able to get to the Airbnb and relax. I just happened to be facing this clock and was staring at it for about 10 minutes. There was just something in my head that made me feel a bit uneasy.
It was connected to a wire like a phone charger which wasn’t quite right. The weirdest thing was, I’d seen a video on Facebook about cameras and how they could be hidden and they had a clock with one in it, too.
Last fall, a couple on a Florida vacation found a camera hidden in a smoke detector in the bedroom of their Longboat Key condo.
Another, less recent case was posted on Reddit four years ago by a couple who found a camera from a Dropcam, a connected home camera made by Google’s Nest, hidden in a mesh basket; they unplugged it, according to the post.
According to Airbnb’s rules, the company states:
Our Standards & Expectations require that all members of the Airbnb community respect each other’s privacy. More specifically, we require hosts to disclose all surveillance devices in their listings, and we prohibit any surveillance devices that are in or that observe the interior of certain private spaces (such as bedrooms and bathrooms) regardless of whether they’ve been disclosed.
So how do you determine whether there are hidden cameras in a room? There is no foolproof method, but there are things you can try. Start by darkening the room and shining your phone’s flashlight around, looking for light that bounces off a camera lens. According to Digital Trends, this helps you spot lenses that are otherwise hidden from the human eye in the shadows or built into objects such as clocks, walls, bureaus, and other furniture.
Other places that cameras can be hidden include:
- Motion sensors
- Smoke detectors
- Alarm clocks
- Wall clocks
- Plug-in air fresheners (especially if they don’t give off any scent)
- Stuffed animals
- Books on a shelf (where a camera is embedded in the spine of a fake book)
- Cooking canisters and spice racks
The next way to find hidden surveillance devices involves scanning the local network for WiFi-enabled cameras. Motherboard has provided a relatively easy shell script that can not only find the cameras but also disable them.
However, Julian Oliver explains that it may be illegal to run the script due to changes made by the FCC.
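In the same spirit, the general idea of such a scan can be sketched entirely offline, where there is no legal question: parse `arp -a`-style output and flag MAC addresses whose vendor prefix (OUI) is on a watchlist. The prefixes below are made-up placeholders, not a real camera-vendor list, and the Motherboard script itself works differently:

```python
import re

# Hypothetical example OUIs; a real check would consult the IEEE OUI registry.
SUSPECT_OUIS = {"b0:c5:54", "00:1a:2b"}

# Matches lines like: "? (192.168.1.23) at b0:c5:54:aa:bb:cc [ether] on wlan0"
MAC_RE = re.compile(r"\(([\d.]+)\) at ((?:[0-9a-f]{2}:){5}[0-9a-f]{2})", re.I)


def find_suspect_devices(arp_output: str) -> list[tuple[str, str]]:
    """Return (ip, mac) pairs whose first three MAC bytes are on the watchlist."""
    hits = []
    for match in MAC_RE.finditer(arp_output):
        ip, mac = match.group(1), match.group(2).lower()
        if mac[:8] in SUSPECT_OUIS:
            hits.append((ip, mac))
    return hits


sample = """\
router (192.168.1.1) at 84:d4:7e:00:11:22 [ether] on wlan0
? (192.168.1.23) at b0:c5:54:aa:bb:cc [ether] on wlan0
laptop (192.168.1.40) at 3c:22:fb:dd:ee:ff [ether] on wlan0
"""

print(find_suspect_devices(sample))  # [('192.168.1.23', 'b0:c5:54:aa:bb:cc')]
```

On your own machine you would feed it the actual output of `arp -a` after joining the rental’s WiFi — and note that a camera on its own hotspot, or wired, won’t show up at all.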
If you do find cameras, Airbnb adds that you can cancel your reservation for a full refund if the cameras aren’t disclosed or are found in an unreasonable area such as a bathroom or bedroom. The problem isn’t limited to Airbnb, either; similar services such as VRBO and HomeAway have comparable policies, with VRBO saying cameras are never to be placed in an area where guests “can reasonably expect privacy.” The trouble is that some owners don’t follow those rules, so ultimately you have to look out for yourself.
Aaron Kesel writes for Activist Post.
- Scientists have a new computational method that expands the way commercial and public gene databases can be used by law enforcement to catch criminals
- While police still need a warrant to access data on sites like 23andMe and Ancestry.com, the new findings make it easier for them to track down criminals
- The development raises privacy concerns, as criminals can be tracked down through family members who submitted their DNA through commercial sites
Scientists are expanding the methods by which law enforcement can use DNA information gathered through sites like Ancestry.com and 23andMe to solve crimes – and they’re raising ethical questions along the way.
California detectives have already used DNA obtained through public databases to track down and arrest Joseph James DeAngelo, suspected of being the Golden State Killer, who police say committed at least 13 murders, more than 50 rapes, and more than 100 burglaries in California from 1974 to 1986.
But now, according to a study published Thursday in the scientific journal Cell, researchers say they have developed a new computational method that expands the way commercial and public gene databases can be cross-referenced with those used by law enforcement.
The result: More crimes solved and countless ethical concerns raised, as many people could be linked to crimes simply because a relative submitted DNA to find out more about their family’s ancestry. Warrants are required to obtain such information from private companies like Ancestry.com and 23andMe.
‘We think when we upload DNA that it’s our information, but it’s not. It’s the information of everyone who is biologically related to us,’ said Julia Creet, a Toronto-based researcher focused on the family history industry, in an interview with DailyMail.com.
The goal of using public and commercial databases to solve crimes is more technically challenging than it may seem, as they tend to rely on completely different genetic markers than those used by law enforcement to identify potential criminals.
‘There’s a legacy problem in that so many DNA profiles have been collected with this older genetic marker system that’s been used by law enforcement since the 1990s,’ said senior author Noah Rosenberg, of Stanford University. ‘The system is not designed for the more challenging queries that are currently of interest, such as identifying people represented in a DNA mixture or identifying relatives of the contributor of a DNA sample.’
Researchers set out to see if they could use the newer, more modern system of genetic markers used in commercial methods and test those against the older law enforcement system to obtain matches and find relatives.
The FBI and police departments across the U.S. use a DNA database known as the Combined DNA Index System, or CODIS. It relies on 20 DNA markers, which are different from those used in common ancestry databases – which scan across hundreds of thousands of sites in the genome.
Rosenberg and his team previously found that software could match people found in both databases even when there were no overlapping, or shared, markers between the two sets of data.
In their new research, the team found that the same approach could be applied to track down close family members, succeeding roughly 31 percent of the time for parent-offspring pairs and 36 percent of the time for sibling pairs.
‘We wanted to examine to what extent these different types of databases can communicate with each other,’ Rosenberg said. ‘It’s important for the public to be aware that information between these two types of genetic data can be connected, often in unexpected ways.’
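The study’s actual statistical machinery is more sophisticated, but the core idea, linking profiles across two databases that share no markers by exploiting learned correlations between the two marker systems, can be illustrated with a toy sketch. Everything below is synthetic and assumed for illustration; it is not the method or data from the Cell paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each person has a hidden genotype vector, and the two
# marker systems are different noisy linear "views" of that genotype,
# so the two databases share no markers at all.
n_people, n_latent, n_codis, n_snp = 200, 15, 20, 100
A = rng.normal(size=(n_codis, n_latent))   # CODIS-style view
B = rng.normal(size=(n_snp, n_latent))     # ancestry/SNP-style view

latent = rng.normal(size=(n_people, n_latent))
codis_db = latent @ A.T + 0.1 * rng.normal(size=(n_people, n_codis))
snp_db = latent @ B.T + 0.1 * rng.normal(size=(n_people, n_snp))

# Learn a linear map from CODIS-style markers to SNP-style markers
# using a reference panel of people present in both databases.
ref = slice(0, 100)
W, *_ = np.linalg.lstsq(codis_db[ref], snp_db[ref], rcond=None)

# Query: a CODIS-style profile for person 150, who otherwise appears
# only in the ancestry-style database. Predict their SNP profile and
# take the nearest neighbour among the non-reference individuals.
predicted_snp = codis_db[150] @ W
dists = np.linalg.norm(snp_db[100:] - predicted_snp, axis=1)
match = 100 + int(np.argmin(dists))
print(match)
```

Because both marker systems reflect the same underlying genotype, a mapping learned on people who appear in both databases lets a profile in one system be matched against the other, which is the “communication” between database types Rosenberg describes.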
This means that law enforcement can more easily track down criminals even when the actual perpetrator has not submitted any DNA, as was the case in the Golden State Killer investigation.
In that instance, detectives submitted DNA collected from a crime scene for the traditional law enforcement genotyping, then used an open-source ancestry database called GEDmatch to link that profile to people who had voluntarily submitted their information into the database.
Many people who obtain their genetic data through commercial sites like Ancestry.com and 23andMe later upload that information into GEDmatch in an effort to connect with other family members around the globe.
Police found relatives who had overlapping DNA markers and through a process of elimination tracked down DeAngelo.
Police were able to access the information on GEDmatch because it was a public database, but law enforcement has successfully used warrants to obtain similar data directly from sites like Ancestry.com. Ancestry.com has 10 million customers, while 23andMe has more than 5 million.
A spokesman for 23andMe said the company uses strong security and privacy protocols to protect customer data – and that it warns customers that those guarantees don’t extend to any third-party sites where they choose to share their personal information.
A spokesman for Ancestry.com said the company prioritizes customer privacy and gives people a chance to opt out of sharing their information with other users. He added that the company will fight any efforts by law enforcement to obtain its data.
For those who have sought to learn more about their own background, the news could raise serious privacy concerns, said Creet, who directed and produced Data Mining the Deceased, a documentary about the industry of DNA and genealogy.
‘Once your results have been uploaded into a collective database there is no way to extract them because other people have picked up on them,’ she said.
Creet said anytime a person submits their DNA to a company to learn more about their heritage, it’s worth considering that they are also submitting 50 percent of their parents’ and children’s DNA, and 25 percent of their grandparents’ or grandchildren’s DNA.
‘Your results aren’t just about you,’ Creet added. ‘They’re about everyone you’re related to, which is how the police can use your results to track down a criminal who is distantly related to you. We’re not just talking about personal privacy; we’re talking about relational privacy.’
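Creet’s percentages follow from a simple rule: along a direct line of descent, the expected fraction of shared autosomal DNA halves with each parent-child link separating two people. A minimal sketch of that arithmetic (the function name is mine, not the article’s, and the rule as written covers direct ancestors and descendants, not siblings, who are a separate case):

```python
def expected_shared_fraction(steps: int) -> float:
    """Expected fraction of autosomal DNA shared with a relative
    separated by `steps` parent-child links in a direct line."""
    return 0.5 ** steps

# One link to a parent or child, two to a grandparent or grandchild.
print(expected_shared_fraction(1))  # 0.5, the article's "50 percent"
print(expected_shared_fraction(2))  # 0.25, the article's "25 percent"
```

This halving is also why even distant relatives remain statistically detectable: a second cousin still shares a predictable, nonzero fraction of your DNA.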
Experts are also concerned about how private companies are using the genetic information they obtain. Ancestry.com has partnered with Calico, a Google spinoff, to analyze anonymized data from more than 1 million genetic samples.
Creet said that – given data breaches at other types of tech companies – there is significant cause for concern that even anonymized data could fall into the wrong hands and be re-identified.
Other issues could arise if health insurance companies were ever able to obtain genetic data on their customers and use that to discriminate against those with predispositions for certain diseases, she added.
‘This is a kind of Wild West of privacy protection and we’re dealing with the most personal type of information,’ Creet said.
Key Ethical Questions When Using DNA in Law Enforcement
DNA isn’t just personal: Do I want to share my sister’s DNA? Anyone who submits their own DNA for testing is also providing genetic info about any of their family members, which could later be accessed by law enforcement or for other unforeseen purposes.
Who owns the data: Who will get access to my DNA? A Google-spinoff company is partnering with Ancestry.com to analyze anonymized genetic data, which many users did not know at the time they submitted their DNA samples. Hacking is also a concern.
Regulation: How should lawmakers limit the usage of genetic data in solving crimes? Should health insurers have access to the information and be allowed to use it to charge more for people with predispositions to expensive health problems? Very little regulation currently exists defining how this data can be used.
Ultimately, society will have to decide if it’s worth losing a great deal of genetic privacy to solve violent crimes, said Dr. Ellen Clayton, a professor at Vanderbilt University’s Center for Biomedical Ethics and Society.
‘The question is: How do you control the downstream users?’ Clayton told DailyMail.com. ‘Do you want to say that law enforcement should only use these kinds of data to find people who you suspect are perpetrators of very heinous violent crimes: rape, murder, etc.?’
Society can reach an answer to that question through legislation that prohibits the use of genetic data for other purposes, such as immigration enforcement, employment discrimination, or to solve minor, nonviolent crimes, she said.
‘These are decisions that society can make. We need to have a robust conversation about that,’ she said.
Clayton also noted that the DNA profiles in law enforcement’s CODIS disproportionately represent racial and ethnic minorities. At the same time, the people using private companies to find out their genetic identities tend to be white and of Northern European descent.
‘One reason that really matters is that they couldn’t find the Golden State Killer in CODIS because he was a policeman. He was a white policeman,’ Clayton said.
You’ve probably heard the old adage, “You can’t fight city hall!” Well, I did. And I won.
Last October, the city of Lexington, Kentucky, sued me in an attempt to keep its “mobile surveillance cameras” secret. Last week, in a major victory for government transparency, Fayette Circuit Judge John Reynolds issued an order granting my appeal for summary judgment. In simple terms, the judge rejected the city’s arguments for keeping its surveillance cameras secret and ordered the Lexington Police Department to release all relevant records.
An Initial Victory
My legal saga started last summer. After surveillance cameras appeared in a local skateboard park, I submitted an open records request to the LPD in an effort to determine what other surveillance programs it operates in Lexington. The police department admitted to using 29 mobile surveillance cameras “available for a variety of video surveillance operations.”
“Cameras are deployed as needed in support of active investigations in accordance with SOP BOI 93-46A, Criteria for Surveillance Conducted by Special Investigations Section,” they said.
While the police department acknowledged the existence of these cameras, it refused to provide any additional information other than redacted documents disclosing costs. The police claimed information about the types of cameras used and the policies surrounding their use were exempt under the state’s open records laws. The LPD cited a statute that exempts certain documents relating to homeland security, along with a second statute exempting certain “investigative reports.”
On appeal, the attorney general’s office rejected both exemptions claimed by the LPD and ordered the city to release the documents.
The City Retaliates—and Fails
On Oct. 2, 2017, a constable served me with a summons. The lawsuit was clearly intended to intimidate me into going away. The initial complaint even asked the judge to award the city court costs. Think about that for a moment. I simply asked for information relating to government activity. In response, the city sued me, a taxpayer, and demanded I foot the legal bill. So much for transparent government that serves “we the people.”
It was a shrewd strategy on the city’s part. City officials likely assumed I wouldn’t have the resources to pursue a court case, and I would just drop the matter. They were correct about the first assumption, but fortunately, the ACLU of Kentucky agreed to represent me in this case.
In court, the police basically argued that disclosing information about their cameras would render them ineffective and potentially jeopardize officer safety. It remains unclear how knowing what kind of “hidden” cameras the police own would make them ineffective. They also asserted that providing information about their surveillance activities would create an “undue burden.” In a nutshell, the city claimed that the investigation of crimes facilitated by the cameras constitutes “an important government interest” that warrants denial of the information.
While these may sound like compelling arguments on the surface, the city of Lexington failed to provide any basis for their assertions. On June 19, Judge Reynolds ruled that the city did not meet the standard of clear and convincing evidence required by the statute.
“In sum, this Court finds that the plaintiff, LFUCG, has failed to assert an applicable provision of the KRS or other binding precedent which would allow the denial of the information requested by Maharrey,” he wrote. “Therefore, LFUCG has failed to meet its burden of proof, and pursuant to ORA [Open Records Act] the requested information should be released for review by Maharrey.”
The city has 30 days to appeal or ask the circuit court for reconsideration. Otherwise, it must release the requested documents.
“We are the government”—or are we?
I’m not particularly comfortable casting myself as a little guy fighting the system. As the national communications director here at the Tenth Amendment Center (TAC), I had some firepower of my own and some resources at my disposal. Still, I could not have won this battle without the help of the ACLU of Kentucky and attorneys Clay Barkley and Heather Gatnarek. If I had been an average Lexingtonian, the city probably would have gotten its wish. I would have dropped the matter and gone away.
Make no mistake—this is a huge win for the people of Lexington. Those of us who live in this city have a right to know what our government does in our name. We have a right to weigh in and decide whether or not the benefit of surveillance technology outweighs the potential for abuse and violation of our basic privacy rights. We have a right to insist government agencies operate potentially invasive technology with oversight and transparency in a manner that respects our civil liberties.
Government secrecy steals power from the people. As the saying goes, sunlight is the best antiseptic. The city’s default position was to maintain secrecy, to keep the blinds closed, to slam the door in our face. Don’t let the fundamental nature of what happened to me escape you. When you boil it all down, the city sued me because I asked questions it didn’t want to answer. It kind of makes you wonder about the old adage, “We are the government,” doesn’t it?
Now, hopefully, we will get the kind of transparency we deserve. Whenever I talk about surveillance, people always ask me, ‘What do you have to hide?’ Well, I’ve been asking the city that question for nearly a year. I don’t think a little transparency and oversight is too much to ask for.
Building the Momentum of Accountability
I started a local group called We See You Watching Lexington to establish oversight and transparency of surveillance programs in this city. People shouldn’t have to get sued in order to find out what kind of surveillance programs the city operates. Furthermore, the city should not operate this kind of potentially invasive technology without firm policies in place directing how, when, and where it is used and establishing how information is stored and shared.
This is actually part of a broader movement of privacy localism that is taking on the surveillance state. The TAC has been involved in the Community Control Over Police Surveillance (CCOPS) initiative from the beginning and helped draft model legislation for a local surveillance ordinance that creates some level of transparency and oversight over local surveillance programs.
This is more than just a victory for me or even the people of Lexington. This is a win for all of us who care about liberty because it proves an important point. We can fight the government and win. Our efforts aren’t in vain. If I can do this, anybody can.
Here’s my challenge to you. Take what I’ve done and build on it. Get involved in your local community. Fight. If you don’t know how, we’ve got some resources to help. I put together a series of short podcasts called Activism 101. They offer simple step-by-step advice for starting activism in your own town.
Michael Maharrey is the national communications director at the Tenth Amendment Center. This article was sourced from FEE.org.
NOTE: In a nutshell, “The Tenth Amendment, or Amendment X of the United States Constitution is the section of the Bill of Rights that basically says that any power that is not given to the federal government is given to the people or the states.” (https://kids.laws.com/tenth-amendment)
The 18 things you may not realise Facebook knows about you: Firm reveals the extent of its spying in a 454-page document to Congress
- Facebook knows your exact mouse movements and battery status
- It can tell if your browser window is ‘foregrounded or backgrounded’
- In some cases, it monitors devices around its users or on the same network
- The details were revealed in a document of answers to Congress following Mark Zuckerberg’s appearance in April over the Cambridge Analytica scandal
WHAT ARE THE 18 METHODS USED BY FACEBOOK TO TRACK USERS REVEALED IN LETTERS TO CONGRESS?
1. ‘Device information’ from ‘computers, phones, connected TVs, and other web-connected devices,’ as well as your ‘internet service provider or mobile operator’
2. ‘Mouse movements’, which can help distinguish humans from bots
3. ‘App and file names’, including the types of files on your devices
4. ‘Device operations’ such as whether a window running Facebook is ‘foregrounded or backgrounded’
5. ‘Device signals’, including ‘nearby Wi-Fi access points, beacons, and cell towers’ and ‘signal strength’ as well as Bluetooth signals
6. ‘Other devices that are nearby or on their network’
7. ‘Battery level’
8. ‘Available storage space’
9. ‘Plugins’ installed
10. ‘Connection speed’
11. ‘Purchases’ Facebook users make on third-party websites
12. Contact information ‘such as an address book’ and ‘call log or SMS log history’ for Android users with these settings synced
13. Information ‘about how users use features like our camera’
14. The ‘location of a photo or the date a file was created’ through the file’s metadata
15. ‘GPS location, camera, or photo’ information found through your device’s settings
16. Purchases from third-party data providers as well as other information about your ‘online and offline actions’
17. ‘Device IDs, and other identifiers, such as from games, apps or accounts users use’
18. ‘When others share or comment on a photo of them, send a message to them, or upload, sync or import their contact information’
The creepy ways Facebook spies on its users have been detailed in a bumper document presented to Congress.
They include tracking mouse movements, logging battery levels and monitoring devices close to a user that are on the same network.
The 454-page report was created in response to questions Mark Zuckerberg was asked during his appearance before Congress in April.
Lawmakers gave Zuckerberg a public grilling over the Cambridge Analytica scandal, but he failed to answer many of their queries.
The new report is Facebook’s attempt to address their questions, although it sheds little new light on the Cambridge Analytica scandal.
However, it does contain multiple disclosures about the way Facebook collects data.
Some are unsurprising, such as the time people spend on Facebook, while others may come as a shock to the majority of users.
Facebook tracks what device you are using to access the network.
To do this, it will log the hardware manufacturer of your smartphone, connected television, tablet, computer, or other internet-connected devices.
Facebook also tracks the operating system, software versions and web browser.
If you’re using a smartphone, it will keep a record of your mobile carrier, while your internet service provider (ISP) will be recorded if you access Facebook over a Wi-Fi or Ethernet connection.
In some cases, it will monitor devices that are using the same network as you.
‘Facebook’s services inherently operate on a cross-device basis: understanding when people use our services across multiple devices helps us provide the same personalized experience wherever people use Facebook,’ the firm wrote in the lengthy document.
According to Facebook, this is done, for example, ‘to ensure that a person’s News Feed or profile contains the same content whether they access our services on their mobile phone or in a desktop computer’s web browser.’
Facebook also says this information is used to curate more personalized ads.
(ANTIMEDIA) — While the nation remained fixated on gun control and Facebook’s violative practices last week, the U.S. government quietly codified the CLOUD Act, enacting its own intrusive policies on citizens’ data.
While the massive, $1.2 trillion omnibus spending bill passed Friday received widespread media attention, the CLOUD Act — which lawmakers snuck into the end of the 2,300-page bill — was hardly addressed.
The Clarifying Lawful Overseas Use of Data (CLOUD) Act “updates the rules for criminal investigators who want to see emails, documents and other communications stored on the internet,” CNET reported. “Now law enforcement won’t be blocked from accessing someone’s Outlook account, for example, just because Microsoft happens to store the user’s email on servers in Ireland.”
The CLOUD Act will also allow the U.S. to enter into agreements that allow the transfer of private data from domestic servers to investigators in other countries on a case-by-case basis, further globalizing the ever-encroaching surveillance state. The Electronic Frontier Foundation, which has strongly opposed the legislation, listed several consequences of the bill, which it called “far-reaching” and “privacy-upending”:
- Enable foreign police to collect and wiretap people’s communications from U.S. companies, without obtaining a U.S. warrant.
- Allow foreign nations to demand personal data stored in the United States, without prior review by a judge.
- Allow the U.S. president to enter “executive agreements” that empower police in foreign nations that have weaker privacy laws than the United States to seize data in the United States while ignoring U.S. privacy laws.
- Allow foreign police to collect someone’s data without notifying them about it.
- Empower U.S. police to grab any data, regardless if it’s a U.S. person’s or not, no matter where it is stored.
The bill updates MLAT (the Mutual Legal Assistance Treaty), the existing framework for sharing internet user data between countries, which both legislators and tech companies have criticized as inefficient.
Some tech companies, like Microsoft, have endorsed the new CLOUD policy. Brad Smith, the company’s president and chief legal officer, called it “a strong statute and a good compromise” that “gives tech companies like Microsoft the ability to stand up for the privacy rights of our customers around the world.”
They echoed the sentiment of lawmakers like Orrin Hatch (R-UT). In February, he said of the bill:
“The CLOUD Act bridges the divide that sometimes exists between law enforcement and the tech sector by giving law enforcement the tools it needs to access data throughout the world while at the same time creating a commonsense framework to encourage international cooperation to resolve conflicts of law.”
One of the biggest complaints from privacy advocates, however, is that the new legislation places too much unmitigated power in the hands of governments with abysmal human rights records while also giving too much discretion to the U.S. government’s executive branch. Noting that the executive branch will decide which countries are human rights compliant and that those countries will then be able to engage in data collection and wiretaps without any further restrictions or oversight, the ACLU warned:
“Flip through Amnesty International or Human Rights Watch’s recent annual reports, and you can find a dizzying array of countries that have ratified major human rights treaties and reflect those obligations in their domestic laws but, in fact, have arrested, tortured and killed people in retaliation for their activism or due to their identity.”
The organization pointed out that no human rights organizations have endorsed the CLOUD Act, adding that “in the case of countries certified by the executive branch, the CLOUD Act would not require the U.S. government to scrutinize data requests by the foreign governments — indeed, the bill would not even require notifying the U.S. government or a user regarding a request.”
Further, the ACLU says, if a foreign government’s human rights record deteriorates, there is no mechanism to revoke its access to data. Considering the U.S.’ existing record on supporting regimes that severely restrict basic rights like freedom of expression, the expanded access the CLOUD Act provides is undoubtedly worrisome.
Also predictable is the government’s stale justification for expanding its power. The CLOUD Act purportedly exists to “protect public safety and combat serious crime, including terrorism” — even if it further empowers governments that support and commit said terrorism.
In an age where the government already engages in mass surveillance and is eager to disable the people’s efforts to protect their privacy through encryption technology, it is unsurprising, albeit dangerous, that Congress continues to encroach on what little is left of safeguards against unwarranted intrusions.