And that’s okay, right?
Wait, let JP just explain it to you…
A somewhat emotional video from a young practicing physician on YouTube, republished below, alerted us to this topic of medical snooping by the French government, allegedly in order to stop / control / monitor the chain of infection (and us). This practice may become the next reason not to visit our family doctor or the local hospital starting tomorrow, May 11, when our two-month(!) lockdown officially ends.
An MSM report of the French situation may be found here, but below are two actual doctors giving their feedback, one on video and one anonymously in writing. Readers may also be interested in this US article by Daisy Luther via Zerohedge on the rollout of contact tracing titled “Contact Tracer” And “Disease Investigator” Jobs Spring Up Across The Country.
So from tomorrow, the French family doctor (and the hospital doctor) will be the one to identify a patient suspected of having covid, ask for the names and contact numbers of the people the patient has been in contact with (both within and outside the immediate family), enter that information into a centralized database, and run (unreliable) tests on the patient. At that point the non-medical staff of the French health insurance system will take over and send teams of people to test those contacts, hoping to find patient zero along the way. The initiating doctors themselves will get 55 euros instead of the regular fee of around 25 euros, plus 2 (or 4) extra euros for each contact name with a phone number.
The infection has likely been in France since at least October/November; confirmed cases were predicted to be going down around the time lockdowns were enforced in France (March 16) and the UK (March 23); Public Health England downgraded the severity of the disease on March 19. So is this all a case of a system and government justifying themselves to the public when, originally, they did absolutely nothing, telling us via the media that it was a Chinese problem? It remains to be seen what kind of spike in cases will happen post-lockdown. Some government heads are expected to roll once ‘normal’ life resumes, so they must be anxious to be seen to be doing something, as well as to find a reason to impose Big Data surveillance systems on us.
The doctor in the very short video below raises some additional points:
The young doctor’s overall point below is simple: I’m a doctor, not a cop.
Philippe Jandrok’s Blog, 7 May 2020
ATTENTION! … TO BE WIDELY DISTRIBUTED! NEW DRAMATIC DIRECTION IN THE ONGOING MADNESS!…
The total compromise of the SS (the Sécurité Sociale) in this so-called state of emergency, totally falsified and permitting the most Orwellian excesses!
What I am reporting here is taken from a communication from the CGT Union of social security funds following a meeting with the national director of the fund, Mr. Nicolas Revel, dated May 5, 2020.
It concerns the plan to mobilize the fund’s administrative employees (and not the fund’s medical personnel, who are supposed to be trained and protect the notion of medical secrecy!) to supposedly limit the spread of the post-lockdown virus.
It consists of the creation of a “brigade” (sic!) in the form of a telephone platform of 6,500 people at the national level, which they cynically call the “Guardian Angel Brigade,” or BAD (really, what a sense of humour!), supposed to carry out large-scale epidemic detection of the famous “contact cases” identified by family doctors from the declarations of their Covid patients.
In keeping with the increasingly invasive anglicisms, it is called “contact tracing”!
These agents will be employed 7 days a week, from 8am to 7pm, with their collective agreement frozen and overtime made compulsory, yet without being eligible for the scheme in question.
But in high places, it causes no remorse!
The “contact tracing” itself works the old-fashioned way: the family doctor diagnoses an infected patient, tests him with a virological test, takes care of him, and organizes his confinement as well as that of his close entourage. According to Santé Publique France, 3,000 to 5,000 cases per day are expected starting May 11th.
The doctor registers his patient in Ameli Pro, with his consent within 24 hours. (ER: Ameli.fr is the website portal for all health insurers in France.)
(ER: From another report, the 55 euros includes the normal 25 euro family doctor consultation fee.)
Mr Revel considers that the professional secrecy to which the employees of the Sécu are bound is sufficient to guarantee data protection. No details are given on what will happen to the data collected on Ameli Pro after the crisis is over.
On the other hand, it is confirmed that its twin, the SITEP tool (operated by DGS/AP-HP/Santé Publique France), will (together with the results of serological tests carried out in laboratories) make it possible to carry out epidemiological studies under cover of anonymity. Since the CNIL has not yet given its opinion on the nature of the files created, that opinion will arrive after the battle. So much the worse for health data, as for labour law: it is all “a matter of urgency”!
Behind Ameli Pro, the agents will take over from the doctor to contact the “contact cases” by telephone. Their mission will be to convince everyone to get masks from the pharmacy, do a laboratory test and go into isolation while waiting for the results, with a work stoppage as backup.
In addition to the research and the contact with the “contact cases,” these employees will also issue work stoppages.
Unanticipated risks to patient zero:
Patients (“patient zero”) who are the source of the trace will have the right to have their identity withheld from the individuals who will be identified and quarantined, but only if they ask their doctor to check the “does not wish to be identified” box in Ameli Pro. There is a risk there.
If claiming to be a known person (ER: an ‘infected’ person?) can make it easier to quarantine a third party, it could lead to retaliatory measures. There are environments where “snitching” is a serious thing. You have to be aware of this reality when you’re doing population tracing.
I’ll stop here. It’s edifying enough for anyone who still has their common sense. Not to mention that if medical ethics still had any meaning that was not misused, all doctors would have to resist and oppose the implementation of such a nightmare. But hey, most of them have seen their incomes drastically reduced during this epidemic! Yes, they have! It’s strictly attested to! And so… A big increase in income after a famine, it can be tempting!!…
Facebook has become so deeply ingrained in people’s lives that it has now become the norm to give it access to personal data without much thought, as if this is but a small price to pay for Facebook’s “free” service. But nothing could be further from the truth.
These traceable and sellable data now give Facebook the power to manipulate what we do, how we feel, what we buy and what we believe. The consequences of giving Facebook this much power are only now becoming apparent, with mounting lawsuits over its security breaches and lousy privacy settings.
Even CrossFit, the well-established branded fitness regimen, decided to stop supporting Facebook and its associated services, bringing all its activities on Facebook and Instagram to a halt starting May 22, 2019. The decision came in the wake of Facebook’s deletion of the Banting7DayMealPlan user group, done without warning or explanation. The group has more than 1.65 million members who post testimonials about the efficacy of a low-carb, high-fat diet.
Although the group was later reinstated, Facebook’s action still shows how it acts in the interest of the food and beverage industry. You see, big advertisers on Facebook, like Coca-Cola, don’t want you to have access to this information, and Facebook is more than happy to ban anyone challenging the industrial food system. By doing this, it potentially contributes to the global chronic disease crisis.
Would you continue trusting a company that thinks too little of violating your rights to privacy?
If you think Facebook’s product is the platform that users interact with, you’re wrong. You are actually Facebook’s primary product. The site makes money off you by meticulously tracking your hobbies, habits and preferences through your “likes,” posts, comments, private messages, friends list, login locations and more. It sells these data, along with your personal information, to whoever wants access to them, potentially facilitating everything from targeted advertising to targeted fraud. This is its entire profit model.
Did you know that it can even access your computer or smartphone’s microphone without your knowledge? So if you’re suddenly receiving ads for products or services that you just spoke out loud about, don’t be surprised — chances are one or more apps linked to your microphone have been eavesdropping on you. These privacy intrusions can continue even after you’ve closed your Facebook account.
Companies can also collect information about the websites you’re visiting or the keywords you’re searching for outside of Facebook’s platform without your permission, and then sell these data to Facebook so it knows which ads to show you. This makes Facebook the most infamous advertising tool ever created, and to increase revenue, it has to continue spying on you.
During Facebook’s early days, its founder, Mark Zuckerberg, assured users in an interview that no user information would be sold or shared with anyone the user had not specifically given permission to. However, the site’s blatant disregard for its users’ privacy proves otherwise. In fact, Facebook has been repeatedly caught mishandling user data and lying about its data harvesting, resulting in multiple legal problems.
The origin of Facebook is also far from altruistic, even though it’s said to have been created “to make the world more open and connected” and to “give people the power to build community.” A forerunner of Facebook was a site called FaceMash, created to rate photos of women, photos that were obtained and used without permission. Some of the women were even compared to farm animals! This speaks volumes about Zuckerberg’s disrespect for privacy. Facebook is basically founded on a misogynistic hate group and it should therefore ban itself.
Facebook is currently facing a number of lawsuits regarding its controversial data-sharing practices and poor security measures. Back in 2010, the U.S. Federal Trade Commission (FTC) revealed that Facebook was sharing user data with third-party software developers without the users’ consent, expressing concerns about the potential misuse of personal information, as Facebook does not track how third parties utilized them.
While Facebook agreed by consent order to “identify risk to personal privacy” and eliminate those risks, it did not actually address its security lapses. Had it done so, it might have prevented the Cambridge Analytica scandal, the main focus of the FTC’s first criminal probe. That issue involved Facebook’s deal with a British political consulting firm, giving it access to the data of around 87 million users, which was used to influence public opinion in the U.S. presidential election.
Another criminal investigation into Facebook’s data-sharing practices is underway. This time, it revolves around Facebook’s partnerships with tech companies and device makers, which allowed them to override users’ privacy settings and gave them broad access to user information.
Amid federal criminal investigations, Zuckerberg announced the company’s latest plan to encrypt messages, so only the sender and the receiver will supposedly be able to decipher what they say. This is ironic, considering it was recently discovered that Facebook stored millions of user passwords in readable plaintext format in its internal platform, potentially compromising the security of millions of its users.
Zuckerberg has repeatedly demonstrated a complete lack of integrity when it comes to fulfilling his promises of privacy. In fact, in a 2010 talk given at the Crunchie awards, he stated that “privacy is no longer a social norm,” implying that using social media automatically strips you of the right to privacy, and that is why they do not respect it.
Facebook’s plan to integrate Instagram, Messenger and WhatsApp would turn it into a global super-monopoly. This merger has been criticized by tech experts, as it robs users of their ability to choose between messaging services, leaving them virtually no choice but to submit to Facebook’s invasive privacy settings. This also gives Facebook unprecedented data mining capabilities.
German antitrust regulator, Bundeskartellamt, is the first to prohibit Facebook’s unrestricted data mining, banning Facebook’s services in Germany if it integrates the three messaging platforms. If other countries follow suit, the merger would fall through, as it probably should.
One of the most outspoken proponents of breaking up monopolies like Facebook, Google and Amazon is U.S. presidential candidate Sen. Elizabeth Warren, D-Mass. Facebook censored her campaign to break up the company, taking down three of her ads with a message saying they went “against Facebook’s advertising policies.”
After Warren took to Twitter to point out that the censorship simply proved why her proposal was necessary, Facebook reinstated her ads with the lame excuse that they had been removed only because they included Facebook’s logo, which violates the site’s advertising policy.
At present I have nearly 1.8 million Facebook followers, and I am grateful for the support. But a while back I expressed my concern that I might be doing more harm than good by being part of Facebook, as I could be contributing to its invasive data mining, an idea that never sat well with me.
For those reasons, I decided that leaving the platform and going back to depending on email is the responsible way forward. If you haven’t subscribed to my newsletter yet, I urge you, your family and your friends to sign up now. I polled my audience and they agreed with my decision to leave.
Kids born in 2019 will be the most tracked humans in history. It’s predicted that by the time they turn eighteen, 70,000 posts about them will be in the internet ether. How and what you post about your child is a personal choice, but trusting that tech companies aren’t building dossiers on our children, starting with that first birth announcement, is a modern-day digital civil right we need to demand. As a mother myself, I want my children’s privacy to be a priority for tech makers.
I used to feel pretty lonely in that endeavor but over the last 12 months, I’ve noticed a trend: more and more people are talking about privacy. They’re calling out the companies that don’t take people’s online privacy seriously enough. They’re sharing articles detailing cover-ups and breaches. They’ve told me they want more privacy online and yet, feel trapped by the Terms of Service of the big platforms they need to use.
I think of this frustration as ‘digital wokeness’. And it’s the one good thing that came out of the Cambridge Analytica scandal. Though we’ve heard the reporting numerous times, let’s recall that from one personality quiz taken by 270,000 people, 87 million Facebook accounts were accessed. Tens of millions of people (maybe you) did not knowingly give permission for their information to be shared or manipulated by political operatives with questionable ethics.
We still don’t know exactly how this data collection and subsequent microtargeting of political content influenced our democratic process. But Cambridge Analytica is just one example. Every day we hear about another undisclosed data breach: private information collected, sometimes sold, and given away without our knowledge or consent. CEOs sit before Congress saying they will “do better” while stories continue to break about negligence and wrongdoing.
Breaches are just a symptom of the problem. The fundamentals of the relationship between customers and these companies are broken. I recently took the helm of the podcast IRL: Online Life is Real Life and spoke to Shoshana Zuboff, author of The Age of Surveillance Capitalism, who explained how most tech companies have built their businesses on the data they collect by tracking their users’ behavior. “We all need to better grasp what the trade-offs really are, because once you learn how to modify human behavior at scale, we’re talking about a kind of power now invested in these private companies,” she told me. I know. The situation is messed up, and it makes you want to put your head in the sand and give up on digital privacy.
Please don’t do that. Fixing our online privacy problem requires both individual and collective action. Support organizations pressuring Congress and Silicon Valley to begin to claw back our digital civil rights and take some simple steps right now to protect your families and send a message to tech companies.
Listen to IRL: The Surveillance Economy
Be more choosy about your technology. There’s no need to go “off the grid,” but choosing products and companies that respect you and your data – like the Firefox browser and DuckDuckGo search engine – sends an important message to big companies that largely prioritize their shareholders over their customers. These smaller, user-focused apps and services have put ethics at the heart of their businesses and deserve to be downloaded.
Become a privacy settings ninja. Most sites and apps have privacy settings you can access, but they tuck them away several tabs deep. In a user-centric world, the default settings would take your privacy preferences into account and make them easier to update. Right now, as you’ve likely experienced, finding and adjusting your privacy settings is just hard enough that most of us give up or get distracted midway through trying to figure out what to click where. Gird yourself and press on! Try a data detox and reset your privacy options, step-by-step.
Listen to IRL: Your Password is the Worst
Educate yourself on how your data is accessed. Easier said than done, I know. That’s why I created a five-part bootcamp. The Privacy Paradox Challenge (from my Note to Self days) is a week of mini-podcasts and personal challenges that can help you get insight into how vast the issue is and how to get your privacy game on point.
On a recent episode of IRL, I spoke to Ellen Silver, VP of Operations at Facebook regarding the ever louder conversation about Facebook’s ethics. She assured me that Facebook is working to be more transparent. A few weeks later her boss, Mark Zuckerberg, made his 2019 New Year’s Resolution to “host a series of public discussions about the future of technology in society.” But we’ve heard promises from Facebook and other tech companies before. Let’s make sure they talk about privacy. Let’s continue asking all of the tech companies harder questions. And let’s start using our spending power to support companies that take our data as seriously as we do. Those are the next steps in this growing conversation about privacy. And that is indeed progress.
Firefox keeps your data safe. Never Sold.
Manoush Zomorodi is co-founder of Stable Genius Productions, a media company with a mission to help people navigate personal and global change. In addition to hosting Firefox’s IRL podcast, Manoush hosts Zig Zag, a podcast about changing the course of capitalism, journalism, and women’s lives. Investigating how technology is transforming humanity is Manoush’s passion and expertise. In 2017, she wrote a book, “Bored and Brilliant: How Spacing Out Can Unlock Your Most Creative Self” and gave a TED Talk about surviving information overload and the “Attention Economy.” She was named one of Fast Company’s 100 Most Creative People in Business in 2018.
There is a growing consciousness about the desire to keep one’s messages private. Some are concerned about hackers, or worry about foreign or domestic government surveillance, but most people just agree with the general principle that what you say in your chat conversations ought to stay between you and the people you chat with.
It’s not a pleasant idea to think that your messages could be archived for perpetuity on a large company’s server or analyzed by some algorithm. The quest for privacy has birthed a whole generation of apps that promise to give you exactly that. Services like Telegram and Signal have turned the phrase “end-to-end encryption” into a popular discussion. We’re here to help you figure out what this is all about and which apps to try.
Before we look at some specific apps, here’s a very brief explainer. Essentially, end-to-end encryption means that only the sender and the recipient can read the message. The message is encrypted on your phone, sent to the recipient, and only decrypted there. This prevents telecom providers, government agencies, and even the company that hosts the service from reading your messages, which means they couldn’t hand over message content even if subpoenaed by a government agency. And if a hacker broke into the messaging service’s servers, they couldn’t get at your conversations.
The desire for end-to-end (E2E) encryption isn’t just for those who don’t want the NSA spying on them. In practice, it’s about a basic sense that messages should be private. With that in mind, be aware that just because something is labeled “encrypted” doesn’t mean it is end-to-end encrypted. Some services only encrypt messages in transit or at rest: your conversations are stored encrypted on the messaging service’s servers, but since the service encrypted them, it can also decrypt them.
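To make the “only the endpoints can read it” idea concrete, here is a deliberately simplified Python sketch. It is a toy one-time pad, not a real messaging protocol (apps like Signal use authenticated key exchange and ratcheting schemes); the point is only that a relay server that never holds the key can store and forward ciphertext without being able to read it.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR each byte with the key; with a random single-use key of the
    # same length this is a one-time pad. XOR is its own inverse, so
    # the same function both encrypts and decrypts.
    assert len(key) == len(data)
    return bytes(k ^ b for k, b in zip(key, data))

# Alice and Bob share `key`; the server in the middle never sees it.
message = b"meet at noon"
key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(key, message)    # what leaves Alice's phone
# ...a server stores and forwards only `ciphertext`...
plaintext = xor_cipher(key, ciphertext)  # what Bob's phone recovers
assert plaintext == message
```

Everything in between, including the service operator, sees only `ciphertext`; without `key` there is nothing to decrypt with, which is why a genuinely end-to-end service cannot produce message content even under legal compulsion.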
The services we’re looking at here all feature end-to-end encryption.
One of the most popular apps in this space is Telegram. It’s been a pretty hot app for a couple of years, which is like 20 years in app time.
The most painstaking part is that you need to invite all of your contacts into your new, secret chat world through the app’s navigation menu. That’s the biggest problem with over-the-top messaging services in general: they don’t have the ubiquity of SMS.
Once you’ve done this, you can message people individually or create group channels for talking with an unlimited number of other users. The upside here is you can escape the limitations of MMS messaging that usually caps you at a particular number of people. Your group can even be public, giving you a mini social network without all the trolls that plague the likes of Facebook and Twitter.
The interface is a little barren, but Telegram makes the list for its robust privacy and offering native apps for iOS, Mac, Windows, the web, and of course Android.
Signal’s claim to fame is that it’s the preferred messaging application of Edward Snowden. It’s among the easiest to set up, as it automatically authenticates your number and can even be used as your default SMS app.
As with Telegram, you can create a group for private banter with an unlimited number of other users. Signal also makes phone calls, which I found to be very clear when testing it out in a couple of different cases.
Signal isn’t optimized for tablets, but the company says that’s on the product roadmap. The design is no-frills, with color variation for different contacts to help keep you from sending a chat to the wrong person.
Another good option is Wire. It offers some fun messaging tricks, like the ability to doodle, share your location, send images, or record a video. The app also includes a chat bot, Anna, which offers somewhat useful answers to various questions about how to use the app.
You can optionally create an account with your phone number, which makes setup and account deletion easy. Wire is great for one-on-one chats if you would prefer conversations with someone be off the record. But it doesn’t have the same type of social or group features found with some of the other offerings here.
You also can’t forget about the uber-popular WhatsApp. Like the others on this list, it promises end-to-end encryption so your messages stay private. The biggest advantage is that the service, which is owned by Facebook, has over a billion users. There’s a very good chance you won’t have to convince all your friends and family to download the app.
That shouldn’t be discounted, as one of the pains of moving to a new messaging service is convincing everybody to jump aboard. However, the Facebook connection could make some wary, especially since the social network recently announced it would be using some account information from WhatsApp, including phone numbers. If your goal is a high level of privacy, it’s worth keeping an eye on.
If you want to see messages disappear before your eyes, then Dust (formerly Cyber Dust) is the way to go. The brainchild of Dallas Mavericks owner Mark Cuban, the messages can disappear in 24 hours or as soon as they’re read, based on your preferences.
The company spells out its encryption policy, and includes a couple other features to ease your mind like chats that don’t show usernames, so even if someone took a screenshot it couldn’t necessarily be attributed to you.
The best app for you is going to depend upon your needs. Secure messaging is a huge and growing area of consumer interest, but it’s worth the effort if staying secure is what you’re after.
If ever there was a red flag story about Amazon’s Alexa then this is it.
If you watch the “Alexa for Medical Care Advice” video posted below, you will hear Alexa asking Peggy to “tell me about the symptoms or problems that are troubling you the most.”
Divulging your health issues to a private corporation is extremely troubling as you will see.
Let’s start with the obvious concerns and talk about something you will not see in the video.
Like Peggy telling Alexa that it is none of Amazon’s business what her health concerns are, and that Alexa should stop listening to everything she says.
But many Americans do not have an issue with Alexa listening to their everyday conversations and have no problem asking Alexa health questions. Because, ‘they have nothing to hide’ — and therein lies the problem.
I challenge anyone to walk up to a stranger while recording the conversation and ask them about their health issues and see what happens. And if you really want to see what happens ask them about their kids’ health issues, etc. Would anyone like to guess what their response will be?
So if a stranger refuses to discuss their personal health issues with someone they do not know, why on earth would they trust Amazon?
Earlier this month, Amazon officially introduced “Alexa Healthcare Skills” which transmits and receives personal healthcare information.
But Alexa Healthcare does much more than just transmit and receive healthcare information.
Alexa can now call pharmacies, spy on kids and monitor your blood sugar.
A few reasons to be concerned about Amazon Healthcare:
1.) Amazon is a for-profit corporation that makes its money by putting listening devices inside people’s homes.
Bloomberg revealed that a global team of Amazon workers is listening to people’s conversations.
Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices.
An article at Medium warns: Amazon listens to everything.
Imagine your horror as you open the attachments and begin listening to the recordings: A discussion of what to have for dinner, two children arguing over a toy, a woman talking to her partner as she gets into the shower.
2.) Besides the obvious privacy concerns of putting Alexa in your home, Alexa can be easily hacked and turned into an eavesdropping device.
When the attack [succeeds], we can control Amazon Echo for eavesdropping and send the voice data through network to the attacker.
3.) Amazon’s Healthcare partners act as though listening to people’s conversations is an act of benevolence.
“We believe voice technology, like Alexa, can make it easy for people to stay on the right path by tracking the status of their mail order prescription,” said Mark Bini, Vice President of Innovation and Member Experience, Express Scripts.
Mark Bini got one thing right: helping “people stay on the right path” will mean an increase in corporate profits as they data mine everything said by you and your family.
Cigna’s claim that divulging your personal health issues to Alexa allows customers to receive “personalized wellness incentives for meeting their health goals” is just another way of saying corporate spying.
“Personalized wellness incentives” is corporate jargon for sending you advertising or increasing a person’s health insurance premiums if they do not meet their health goals.
Amazon did not become the most valuable company in the world by helping people. The only reason why Amazon and its partners care about your healthcare is so they can profit from it.
You can read more at the MassPrivateI blog, where this article first appeared.
By Aaron Kesel
More and more Airbnb hosts are hiding security cameras in rooms, and the company doesn’t seem worried about the practice as long as hosts disclose the cameras and they aren’t in the bathrooms or bedrooms, according to a report by Fast Company.
“If you find a truly hidden camera in your bedroom or bathroom, AirBnB will support you. If you find an undisclosed camera in the private living room, AirBnB will not support you,” Jeffrey Bigham, a computer science professor at Carnegie Mellon University told Fast Company.
Bigham blogged about his recent experience at an Airbnb where he found cameras in his “private living room,” writing in a blog post, “A Camera is Watching You in Your AirBnB: And, you consented to it.”
“I just assume that there will be camera constantly recording when I stay in airbnb, or anywhere really. They way I never have to worry about whether it exist or not. As recording technology becoming more and more advance, it’s less and less reasonable to expect privacy. I rather adapt my life to fit this new culture,” Bigham writes.
Airbnb argued that since a single camera was visible in the photos advertising the listing, the host had disclosed the security cameras.
Airbnb has since apologized and has given Bigham a refund, according to CNET. A spokesperson provided the publication with the following statement:
Our community’s privacy and safety is our priority, and our original handling of this incident did not meet the high standards we set for ourselves. We have apologized to Mr. Bigham and fully refunded him for his stay. We require hosts to clearly disclose any security cameras in writing on their listings and we have strict standards governing surveillance devices in listings. This host has been removed from our community.
However, Bigham is far from the only Airbnb customer to find cameras in a rented room; and while Bigham found his in a private living room, others have found them in more private places like bathrooms and bedrooms.
Another case happened last September in Toronto, Canada, where a couple — Dougie Hamilton and his girlfriend — rented an Airbnb flat and discovered hidden cameras in their bedroom, News.com.au reported.
Hamilton told the Daily Record:
We were only in the place for 20 minutes when I noticed the clock. We’d had a busy day around the city and finally were able to get to the Airbnb and relax. I just happened to be facing this clock and was staring at it for about 10 minutes. There was just something in my head that made me feel a bit uneasy.
It was connected to a wire like a phone charger which wasn’t quite right. The weirdest thing was, I’d seen a video on Facebook about cameras and how they could be hidden and they had a clock with one in it, too.
Last fall, a couple on a Florida vacation found a camera hidden in a smoke detector in the bedroom of their Longboat Key condo.
Another, less recent case was posted on Reddit four years ago, claiming a couple found a Dropcam, a connected home security camera from Google's Nest. According to the post, the couple found the camera hidden in a mesh basket before unplugging it.
According to Airbnb’s rules, the company states:
Our Standards & Expectations require that all members of the Airbnb community respect each other’s privacy. More specifically, we require hosts to disclose all surveillance devices in their listings, and we prohibit any surveillance devices that are in or that observe the interior of certain private spaces (such as bedrooms and bathrooms) regardless of whether they’ve been disclosed.
So how do you determine whether there are hidden cameras in a room? There is no foolproof method, but there are checks worth trying. Start by turning off the lights and shining your phone's flashlight around the room, looking for glints reflecting off a camera lens. According to Digital Trends, this helps you spot lenses that are otherwise hidden in the shadows or built into objects such as clocks, walls, bureaus, and other furniture.
Other places that cameras can be hidden include clocks, smoke detectors, and everyday household objects such as baskets and shelving.
The next way to find hidden surveillance devices involves scanning the local network for WiFi-enabled cameras. Motherboard has provided a relatively easy shell script that can not only find such cameras but disable them.
However, Julian Oliver explains that it may be illegal to run the script due to changes made by the FCC.
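Script legality aside, the underlying idea of a network scan can be sketched passively, without sending any traffic, by reading the machine's ARP table and checking each device's MAC address prefix (its OUI, which identifies the manufacturer) against a watchlist of camera vendors. This is an illustrative sketch only: the prefixes in CAMERA_OUIS below are placeholders, not a verified vendor list, and a real check would use prefixes from the IEEE OUI registry.

```python
import re

# Placeholder OUI (vendor) prefixes -- illustrative only; substitute
# real camera-vendor prefixes from the IEEE OUI registry.
CAMERA_OUIS = {"b0:c5:54", "00:1a:2b"}

# Matches lines in the format produced by `arp -a`, e.g.
# "? (192.168.1.23) at b0:c5:54:12:34:56 on en0"
ARP_LINE = re.compile(r"\((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-f:]{17})", re.I)

def flag_cameras(arp_output: str) -> list[tuple[str, str]]:
    """Return (ip, mac) pairs whose MAC prefix matches the watchlist."""
    hits = []
    for line in arp_output.splitlines():
        m = ARP_LINE.search(line)
        if not m:
            continue
        mac = m.group("mac").lower()
        if mac[:8] in CAMERA_OUIS:  # first three octets = vendor prefix
            hits.append((m.group("ip"), mac))
    return hits

sample = (
    "router.lan (192.168.1.1) at 44:fe:3b:aa:bb:cc on en0\n"
    "? (192.168.1.23) at b0:c5:54:12:34:56 on en0\n"
)
print(flag_cameras(sample))  # [('192.168.1.23', 'b0:c5:54:12:34:56')]
```

Because this only reads addresses the device has already broadcast, it avoids the active de-authentication step that raises the legal questions Oliver describes.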
If you do find cameras, Airbnb says you can cancel your reservation for a full refund if they weren't disclosed or are placed in an unreasonable area such as a bathroom or bedroom. The problem isn't limited to Airbnb; similar services such as VRBO and HomeAway have comparable policies, with VRBO saying cameras are never to be placed in an area where guests "can reasonably expect privacy." The trouble is that some owners don't follow those rules, so in the end you can only rely on your own checks.
Aaron Kesel writes for Activist Post.
Scientists are expanding the methods by which law enforcement can use DNA information gathered through sites like Ancestry.com and 23andMe to solve crimes – and they’re raising ethical questions along the way.
California detectives have already used DNA obtained through public databases to track down and arrest Joseph James DeAngelo, suspected of being the Golden State Killer, who police say committed at least 13 murders, more than 50 rapes, and more than 100 burglaries in California from 1974 to 1986.
But now, according to a study published Thursday in the scientific journal Cell, researchers say they have developed a new computational method that expands the way commercial and public gene databases can be cross-referenced with those used by law enforcement.
The result: More crimes solved and countless ethical concerns raised, as many people could be linked to crimes simply because a relative submitted DNA to find out more about their family’s ancestry. Warrants are required to obtain such information from private companies like Ancestry.com and 23andMe.
‘We think when we upload DNA that it’s our information, but it’s not. It’s the information of everyone who is biologically related to us,’ said Julia Creet, a Toronto-based researcher focused on the family history industry, in an interview with DailyMail.com.
The goal of using public and commercial databases to solve crimes is more technically challenging than it may seem, as they tend to rely on completely different genetic markers than those used by law enforcement to identify potential criminals.
‘There’s a legacy problem in that so many DNA profiles have been collected with this older genetic marker system that’s been used by law enforcement since the 1990s,’ said senior author Noah Rosenberg, of Stanford University. ‘The system is not designed for the more challenging queries that are currently of interest, such as identifying people represented in a DNA mixture or identifying relatives of the contributor of a DNA sample.’
Researchers set out to see if they could use the newer, more modern system of genetic markers used in commercial methods and test those against the older law enforcement system to obtain matches and find relatives.
The FBI and police departments across the U.S. use a DNA database known as the Combined DNA Index System, or CODIS. It relies on 20 DNA markers, which are different from those used in common ancestry databases – which scan across hundreds of thousands of sites in the genome.
Rosenberg and his team previously found that software could match people found in both databases even when there were no overlapping, or shared, markers between the two sets of data.
In their new research, the team found that the same approach could be applied to track down close family members, succeeding roughly 31 percent of the time for parent-offspring pairs and 36 percent of the time for sibling pairs.
‘We wanted to examine to what extent these different types of databases can communicate with each other,’ Rosenberg said. ‘It’s important for the public to be aware that information between these two types of genetic data can be connected, often in unexpected ways.’
This means that law enforcement can more easily track down criminals even when the actual perpetrator has not submitted any DNA, as was the case in the Golden State Killer investigation.
In that instance, detectives submitted DNA collected from a crime scene for the traditional law enforcement genotyping, then used an open-source ancestry database called GEDmatch to link that profile to people who had voluntarily submitted their information into the database.
Many people who obtain their genetic data through commercial sites like Ancestry.com and 23andMe later upload that information into GEDmatch in an effort to connect with other family members around the globe.
Police found relatives who had overlapping DNA markers and through a process of elimination tracked down DeAngelo.
Police were able to access the information on GEDmatch because it was a public database, but law enforcement has successfully used warrants to obtain similar data directly from sites like Ancestry.com. Ancestry.com has 10 million customers, while 23andMe has more than 5 million.
A spokesman for 23andMe said the company uses strong security and privacy protocols to protect customer data – and that it warns customers that those guarantees don’t extend to any third-party sites where they choose to share their personal information.
A spokesman for Ancestry.com said the company prioritizes customer privacy and gives people a chance to opt out of sharing their information with other users. He added that the company will fight any efforts by law enforcement to obtain its data.
For those who have sought to learn more about their own background, the news could raise serious privacy concerns, said Creet, who directed and produced Data Mining the Deceased, a documentary about the DNA and genealogy industry.
‘Once your results have been uploaded into a collective database there is no way to extract them because other people have picked up on them,’ she said.
Joseph James DeAngelo. Police say DeAngelo is a suspected California serial killer known as the Golden State Killer who committed at least 13 homicides and 45 rapes throughout the state in the 1970s and ’80s.
Creet said anytime a person submits their DNA to a company to learn more about their heritage, it’s worth considering that they are also submitting 50 percent of their parents’ and children’s DNA, and 25 percent of their grandparents’ or grandchildren’s DNA.
‘Your results aren’t just about you,’ Creet added. ‘They’re about everyone you’re related to, which is how the police can use your results to track down a criminal who is distantly related to you. We’re not just talking about personal privacy; we’re talking about relational privacy.’
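The percentages Creet cites follow from a simple rule: each transmission step (meiosis) halves the expected fraction of shared autosomal DNA, so a parent or child shares about 1/2, a grandparent or grandchild about 1/4, and so on down the line. A minimal sketch of that arithmetic:

```python
def expected_shared_fraction(meioses: int) -> float:
    """Expected fraction of autosomal DNA shared across a chain of
    `meioses` parent-to-child transmission steps."""
    return 0.5 ** meioses

# One step separates parent and child; two separate grandparent and grandchild.
print(expected_shared_fraction(1))  # 0.5  -> the "50 percent" for parents/children
print(expected_shared_fraction(2))  # 0.25 -> the "25 percent" for grandparents
```

Note that this formula covers direct ancestor/descendant lines only; full siblings also average about 1/2 shared DNA, but through contributions from both parents rather than a single chain.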
Experts are also concerned about how private companies are using genetic information they obtain. Ancestry.com is partnered with Calico, which is a Google spinoff, in an effort to analyze anonymized data from more than 1 million genetic samples.
Creet said that – given data breaches at other types of tech companies – there is significant cause for concern that even anonymized data could fall into the wrong hands and be re-identified.
Other issues could arise if health insurance companies were ever able to obtain genetic data on their customers and use that to discriminate against those with predispositions for certain diseases, she added.
‘This is a kind of Wild West of privacy protection and we’re dealing with the most personal type of information,’ Creet said.
DNA isn’t just personal: Do I want to share my sister’s DNA? Anyone who submits their own DNA for testing is also providing genetic info about any of their family members, which could later be accessed by law enforcement or for other unforeseen purposes.
Who owns the data: Who will get access to my DNA? A Google-spinoff company is partnering with Ancestry.com to analyze anonymized genetic data, which many users did not know at the time they submitted their DNA samples. Hacking is also a concern.
Regulation: How should lawmakers limit the usage of genetic data in solving crimes? Should health insurers have access to the information and be allowed to use it to charge more for people with predispositions to expensive health problems? Very little regulation currently exists defining how this data can be used.
Ultimately, society will have to decide if it’s worth losing a great deal of genetic privacy to solve violent crimes, said Dr. Ellen Clayton, a professor at Vanderbilt University’s Center for Biomedical Ethics and Society.
‘The question is: How do you control the downstream users?’ Clayton told DailyMail.com. ‘Do you want to say that law enforcement should only use these kinds of data to find people who you suspect are perpetrators of very heinous violent crimes: rape, murder, etc.?’
Society can reach an answer to that question through legislation that prohibits the use of genetic data for other purposes, such as immigration enforcement, employment discrimination, or to solve minor, nonviolent crimes, she said.
‘These are decisions that society can make. We need to have a robust conversation about that,’ she said.
Clayton also noted that the DNA profiles in law enforcement's CODIS database disproportionately represent racial and ethnic minorities, while the people using private companies to find out their genetic identities tend to be white and of Northern European descent.
'One reason that really matters is that they couldn't find the Golden State Killer in CODIS because he was a policeman. He was a white policeman,' Clayton said.