Trackin’ Baby’s Poop

Huggies Now Selling Smart Diapers With Bluetooth Sensors Even Though Radiation Exposure From Them Isn’t Safe for Babies

By B.N. Frank

The idea of “Smart Diapers” for babies dates back a few years. As noted in a recent Vox article, Huggies is now selling them in Korea and Japan, and the U.S. and Mexico may be getting them next.

More companies are interested in creating and marketing these diapers, as well as other “Smart” personal care products. Besides being expensive, these devices rely on Bluetooth technology, which emits harmful wireless radiation, and no safe level of wireless radiation exposure has been determined for children or pregnant women. In fact, 250 scientists have signed a petition that warns against numerous devices emitting Radio Frequency (RF) Radiation, which is used in WiFi and Bluetooth.

“Smart” Diapers also qualify as another source of “Surveillance Capitalism,” since companies freely admit that they are able to gather data from the diaper sensors and track how their customers use them.

Regardless, companies are hoping that there is much money to be made, especially since “Smart Diapers” for adults already appear to be a thriving market. Poor grandma and grandpa…

That long march toward making smart diapers happen has been driven more by fears of slipping market shares than by any kind of real demand from consumers. The furious pace of innovation belies the fact that the US diaper market is in trouble. As the birthrate declines for the seventh year in a row, there are fewer and fewer new parents to buy diapers, and almost all major diaper brands have taken hits. After Kimberly-Clark, which manufactures Huggies, laid off 13 percent of its workers in January 2018, the CEO told investors, “You can’t encourage moms to use more diapers in a developed market where the babies aren’t being born in those markets.”

Last summer, to counter wilting sales, Pampers raised the price of its signature diaper by 4 percent. Huggies is making a different bet: by selling upscale diapers, it hopes to recoup the profits lost to a rapidly shrinking baby diaper market.

“The fact that the birthrates are quite low in the US has stirred a lot of interest in trying to get the consumer to spend more,” said Ali Dibadj, who tracks the personal products industry for the investment management group Sanford C. Bernstein. “The only way they can increase their business is to bring better products to the market. Their whole hope is to create products that the consumer base will pay more for.”

That puts Huggies squarely in line with other companies advocating seemingly unnecessary tech infusions into ordinary hygiene products on the bet that it will widen their profit margins. The brands behind the major US diapers have already flooded the market with “smart” toothbrushes, razors, and skin care wands, all of which they hope will entice wealthier consumers who can be convinced to drop the extra money.

Later this year, Procter & Gamble, which manufactures Pampers, is launching an AI toothbrush that claims to improve brushing. While typical electric toothbrushes cost around $30, P&G is planning to start its AI brush at $279, a massive price jump that foreshadows the future of the smart diaper. Kimberly-Clark, for its part, promised more “meaningful innovation” of its personal hygiene products, although the company already boasts everything from smart toilet paper to smart restrooms equipped with sensors that relay data about soap and toilet paper use.

[…]

There is not a lot to a smart diaper — the removable Bluetooth sensor, which resembles an orange disk, can be attached to the outside of any regular diaper. That sensor syncs to a Huggies smartphone app, where it relays information about the temperature and air quality, and — in addition to individual alerts about baby poop or pee — tracks the overall frequency of a baby’s bowel movements and calculates the times of day the diaper tends to need changing. No more than five people can register as guardians on the app. (Source: Vox)
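The frequency tracking described above is, at bottom, simple aggregation of timestamped change events. As a toy sketch only, with entirely hypothetical data and names (this is not Huggies’ actual code), here is the kind of calculation such an app might perform to report changes per day and the hours when changes cluster:

```python
from collections import Counter
from datetime import datetime

# Hypothetical log of diaper-change events, as a sensor app might record them
events = [
    "2019-05-01 06:40", "2019-05-01 10:15", "2019-05-01 14:05",
    "2019-05-02 06:55", "2019-05-02 10:30", "2019-05-02 13:50",
    "2019-05-03 07:10", "2019-05-03 10:05", "2019-05-03 14:20",
]

timestamps = [datetime.strptime(e, "%Y-%m-%d %H:%M") for e in events]

# Average number of changes per day
days = {t.date() for t in timestamps}
changes_per_day = len(timestamps) / len(days)

# Hours of the day when changes tend to happen
hour_counts = Counter(t.hour for t in timestamps)
typical_hours = sorted(h for h, _ in hour_counts.most_common(3))

print(f"avg changes/day: {changes_per_day:.1f}")   # 3.0 with the sample data
print(f"typical change hours: {typical_hours}")
```

Nothing here is sophisticated, which is rather the point: the “smart” part of a smart diaper is mostly a wet/dry sensor plus this sort of bookkeeping.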

Tony Park, who developed the Bluetooth sensor used in Huggies’ smart diapers, offers a justification for purchasing this product:

Park told Vox that the design is personal for him. Some babies, like his daughter, don’t cry when their diapers need changing, and figuring out when to switch diapers before a rash develops is a challenging guessing game. His target customers are millennial first-time parents who don’t have the time to constantly check diapers. “They are quite busy working two jobs,” he said. “They want to get involved in parenting, but they don’t have enough time to share with their baby. With our Monit device, they can get a notification whenever and wherever.”

Oh, Tony. Just because you can doesn’t mean you should.

from:    https://www.activistpost.com/2019/05/huggies-smart-diapers-bluetooth-sensors-radiation-exposure.html

Privacy Matters

Did Cambridge Analytica Help to Create ‘Digital Wokeness’?

Kids born in 2019 will be the most tracked humans in history. It’s predicted that by the time they turn eighteen, 70,000 posts about them will be in the internet ether. How and what you post about your child is a personal choice, but trusting that tech companies aren’t building dossiers on our children, starting with that first birth announcement, is a modern-day digital civil right we need to demand. As a mother myself, I want my children’s privacy to be a priority for tech makers.

I used to feel pretty lonely in that endeavor, but over the last 12 months I’ve noticed a trend: more and more people are talking about privacy. They’re calling out the companies that don’t take people’s online privacy seriously enough. They’re sharing articles detailing cover-ups and breaches. They’ve told me they want more privacy online and yet feel trapped by the Terms of Service of the big platforms they need to use.

I think of this frustration as ‘digital wokeness’. And it’s the one good thing that came out of the Cambridge Analytica scandal. Though we’ve heard the reporting numerous times, let’s recall that from one personality quiz taken by 270,000 people, 87 million Facebook accounts were accessed. Tens of millions of people (maybe you) did not knowingly give permission for their information to be shared or manipulated by political operatives with questionable ethics.

We still don’t know exactly how this data collection and subsequent microtargeting of political content influenced our democratic process. But Cambridge Analytica is just one example. Every day we hear about another undisclosed data breach. Private information is collected, sometimes sold, and given away without our knowledge or consent. CEOs sit before Congress saying they will “do better” while stories continue to break about negligence and wrongdoing.

Just what exactly is happening?

Breaches are just a symptom of the problem. The fundamentals of the relationship between customers and these companies are broken. I recently took the helm of the podcast IRL: Online Life is Real Life and spoke to Shoshana Zuboff, author of The Age of Surveillance Capitalism, who explained how most tech companies have built their businesses on the data they collect by tracking their users’ behavior. “We all need to better grasp what the trade-offs really are, because once you learn how to modify human behavior at scale, we’re talking about a kind of power now invested in these private companies,” she told me. I know. The situation is messed up, and it makes you want to put your head in the sand and give up on digital privacy.

Please don’t do that. Fixing our online privacy problem requires both individual and collective action. Support organizations pressuring Congress and Silicon Valley to begin to claw back our digital civil rights and take some simple steps right now to protect your families and send a message to tech companies.


Listen to IRL: The Surveillance Economy


Yes, doing these things is annoying and tedious, but it does matter:

Be more choosy about your technology. There’s no need to go “off the grid,” but choosing products and companies that respect you and your data – like the Firefox browser and DuckDuckGo search engine – sends an important message to big companies that largely prioritize their shareholders over their customers. These smaller, user-focused apps and services have put ethics at the heart of their businesses and deserve to be downloaded.

Become a privacy settings ninja. Most sites and apps have privacy settings you can access, but they tuck them away several tabs deep. In a user-centric world, the default settings would take your privacy preferences into account and make them easier to update. Right now, as you’ve likely experienced, finding and adjusting your privacy settings is just hard enough that most of us give up or get distracted midway through trying to figure out what to click where. Gird yourself and press on! Try a data detox and reset your privacy options, step-by-step.


Listen to IRL: Your Password is the Worst


Educate yourself on how your data is accessed. Easier said than done, I know. That’s why I created a five-part bootcamp. The Privacy Paradox Challenge (from my Note to Self days) is a week of mini-podcasts and personal challenges that can help you get insight into how vast the issue is and how to get your privacy game on point.

On a recent episode of IRL, I spoke to Ellen Silver, VP of Operations at Facebook regarding the ever louder conversation about Facebook’s ethics. She assured me that Facebook is working to be more transparent. A few weeks later her boss, Mark Zuckerberg, made his 2019 New Year’s Resolution to “host a series of public discussions about the future of technology in society.” But we’ve heard promises from Facebook and other tech companies before. Let’s make sure they talk about privacy. Let’s continue asking all of the tech companies harder questions. And let’s start using our spending power to support companies that take our data as seriously as we do. Those are the next steps in this growing conversation about privacy. And that is indeed progress.


Firefox keeps your data safe. Never Sold.

Download Firefox


Manoush Zomorodi is co-founder of Stable Genius Productions, a media company with a mission to help people navigate personal and global change. In addition to hosting Firefox’s IRL podcast, Manoush hosts Zig Zag, a podcast about changing the course of capitalism, journalism, and women’s lives. Investigating how technology is transforming humanity is Manoush’s passion and expertise. In 2017, she wrote a book, “Bored and Brilliant: How Spacing Out Can Unlock Your Most Creative Self” and gave a TED Talk about surviving information overload and the “Attention Economy.” She was named one of Fast Company’s 100 Most Creative People in Business in 2018.

from:    https://blog.mozilla.org/internetcitizen/2019/04/22/did-cambridge-analytica-help-to-create-digital-wokeness/?utm_medium=email&utm_source=email&utm_campaign=2019fxnews-en&utm_content=05032019

Some Options for Private Messaging

The best messaging apps with end-to-end encryption

If you want to keep prying eyes away from your conversations, then these are the apps that you need to get.

There is a growing desire to keep one’s messages private. Some are concerned about hackers, or worry about foreign or domestic government surveillance, but most people just agree with the general principle that what you say in your chat conversations ought to stay between you and the people you chat with.

It’s not a pleasant thought that your messages could be archived in perpetuity on a large company’s server or analyzed by some algorithm. The quest for privacy has birthed a whole generation of apps that promise to give you exactly that. Services like Telegram and Signal have turned the phrase “end-to-end encryption” into a topic of popular discussion. We’re here to help you figure out what this is all about and which apps to try.

A little background on encryption

Before we look at some specific apps, here’s a very brief explainer. Essentially, end-to-end encryption means that only the sender and the recipient can read the message. The message is encrypted on your phone, sent to the recipient, and then decrypted. This keeps prying eyes, from telecom providers and government agencies to the company that hosts the service itself, from reading your messages. It also means the service couldn’t hand over your messages even if subpoenaed by a government agency. And if a hacker broke into the messaging service’s servers, they couldn’t get at your conversations.

The desire for end-to-end (E2E) encryption isn’t just about those who don’t want the NSA to spy on them. In practice, it’s about a basic sense that messages should be private. With that in mind, be aware that just because something carries the word “encrypted” doesn’t mean it is end-to-end encrypted. Some services only encrypt the message in transit: your conversations are stored encrypted on the messaging service’s servers, but since the service encrypted them, it can also decrypt them.
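To make the distinction concrete, here is a toy Python sketch of the end-to-end property. It uses a simple XOR one-time pad, which is emphatically not what real messaging apps use (Signal, for instance, uses authenticated public-key cryptography), but it illustrates the key idea: a relay server that never holds the key sees only unreadable ciphertext.

```python
import secrets

# Toy sketch only: real E2E messengers negotiate keys with public-key
# cryptography (e.g. the Signal protocol), not a pre-shared one-time pad.
def xor_bytes(key: bytes, data: bytes) -> bytes:
    return bytes(k ^ d for k, d in zip(key, data))

plaintext = b"meet me at noon"

# The sender and recipient share this key; the server never sees it.
key = secrets.token_bytes(len(plaintext))

# Encrypted on the sender's phone...
ciphertext = xor_bytes(key, plaintext)

# ...the relay server stores and forwards only the ciphertext...
server_sees = ciphertext

# ...and only the recipient, who holds the key, can decrypt it.
assert xor_bytes(key, server_sees) == plaintext
```

In the “encrypted, but not end-to-end” case from the paragraph above, the server would hold `key` too, and could run the decryption itself.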

The services we’re looking at here all feature end-to-end encryption.

Telegram

One of the most popular apps in this space is Telegram. It’s been a pretty hot app for a couple of years, which is like 20 years in app time.

The most painstaking part is that you need to invite all of your contacts into your new, secret chat world through the app’s navigation menu. This is the biggest problem with over-the-top services: they lack the ubiquity of SMS messaging.


Telegram lets you create private or public channels for groups that you want to stay connected to.

Once you’ve done this, you can message people individually or create group channels for talking with an unlimited number of other users. The upside here is you can escape the limitations of MMS messaging that usually caps you at a particular number of people. Your group can even be public, giving you a mini social network without all the trolls that plague the likes of Facebook and Twitter.

The interface is a little barren, but Telegram makes the list for its robust privacy and offering native apps for iOS, Mac, Windows, the web, and of course Android.

Signal

Signal’s claim to fame is that it’s the preferred messaging application of Edward Snowden. It’s among the easiest to set up, as it automatically authenticates your number and can even be used as your default SMS app.

As with Telegram, you can create a group for private banter with an unlimited number of other users. Signal also makes phone calls, which I found to be very clear when testing it out in a couple of different situations.


Signal offers a lot of different features and can serve as your main messaging app.

Signal isn’t optimized for tablets, but the company says that’s on the product roadmap. The design is no-frills, with color variation for different contacts to help keep you from sending a chat to the wrong person.

Wire

Another good option is Wire. It offers some fun messaging tricks, like the ability to doodle, share your location, send images, or record a video. The app also includes a chat bot, Anna, which offers somewhat useful answers to various questions about how to use the app.


Wire offers a chat bot and a number of different ways to get your message across.

You can optionally create an account with your phone number, which makes setup and account deletion easy. Wire is great for one-on-one chats if you would prefer conversations with someone be off the record. But it doesn’t have the same type of social or group features found with some of the other offerings here.

WhatsApp

You also can’t forget about the uber-popular WhatsApp. Like the others on this list, it promises end-to-end encryption so your messages stay private. The biggest advantage is that the service, which is owned by Facebook, has over a billion users. There’s a very good chance you won’t have to convince all your friends and family to download the app.


WhatsApp is a popular messaging app throughout the world.

That shouldn’t be discounted, as one of the pains of moving to a messaging service is convincing everybody to jump aboard. However, the Facebook connection could make some wary, especially since the social network recently announced it’d be using some account information, including phone numbers, from WhatsApp. If your goal is a high threshold of privacy, it’s worth keeping an eye on.

Dust

If you want to see messages disappear before your eyes, then Dust (formerly Cyber Dust) is the way to go. The brainchild of Dallas Mavericks owner Mark Cuban, the messages can disappear in 24 hours or as soon as they’re read, based on your preferences.


Dust (formerly Cyber Dust) makes your messages disappear and offers an interesting social element.

The company spells out its encryption policy and includes a couple of other features to ease your mind, like chats that don’t show usernames, so even if someone took a screenshot, it couldn’t necessarily be attributed to you.

The best app for you is going to depend upon your needs. Secure messaging is a huge and growing area of consumer interest, and sorting through the options takes some effort, but it’s worth it if staying secure is what you’re after.

Alexa Knows About You … Everything!

Amazon Alexa Wants To Spy On Your Family’s Health

By MassPrivateI

If ever there was a red-flag story about Amazon’s Alexa, this is it.

If you watch the “Alexa for Medical Care Advice” video posted below, you will hear Alexa asking Peggy to “tell me about the symptoms or problems that are troubling you the most.”

Divulging your health issues to a private corporation is extremely troubling as you will see.

Let’s start with the obvious concerns and talk about something you will not see in the video.

Like Peggy telling Alexa that it is none of Amazon’s business what her health concerns are, and that Alexa should stop listening to everything she says.

But many Americans do not have an issue with Alexa listening to their everyday conversations and have no problem asking Alexa health questions. Because, ‘they have nothing to hide’ — and therein lies the problem.

I challenge anyone to walk up to a stranger while recording the conversation and ask them about their health issues and see what happens. And if you really want to see what happens ask them about their kids’ health issues, etc. Would anyone like to guess what their response will be?

So if a stranger refuses to discuss their personal health issues with someone they do not know, why on earth would they trust Amazon?

Earlier this month, Amazon officially introduced “Alexa Healthcare Skills” which transmits and receives personal healthcare information.

But Alexa Healthcare does much more than just transmit and receive healthcare information.

Alexa can now call pharmacies, spy on kids, and monitor your blood sugar.

  • Express Scripts (a leading Pharmacy Services Organization): Members can check the status of a home delivery prescription and can request Alexa notifications when their prescription orders are shipped.
  • Cigna Health Today (by Cigna, the global health service company): Eligible employees with one of Cigna’s large national accounts can now manage their health improvement goals and increase opportunities for earning personalized wellness incentives.
  • My Children’s Enhanced Recovery After Surgery (ERAS) (by Boston Children’s Hospital, a leading children’s hospital): Parents and caregivers of children in the ERAS program at Boston Children’s Hospital can provide their care teams updates on recovery progress and receive information regarding their post-op appointments.
  • Swedish Health Connect (by Providence St. Joseph Health, a healthcare system with 51 hospitals across 7 states and 829 clinics): Customers can find an urgent care center near them and schedule a same-day appointment.
  • Atrium Health (a healthcare system with more than 40 hospitals and 900 care locations throughout North and South Carolina and Georgia): Customers in North and South Carolina can find an urgent care location near them and schedule a same-day appointment.
  • Livongo (a leading consumer digital health company that creates new and different experiences for people with chronic conditions): Members can query their last blood sugar reading, blood sugar measurement trends, and receive insights and Health Nudges that are personalized to them.

A few reasons to be concerned about Amazon Healthcare:

1.) Amazon is a for-profit corporation that makes its money by putting listening devices inside people’s homes.

Bloomberg revealed that a global team of Amazon workers is listening to people’s conversations.

Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices.

An article at Medium warns: Amazon listens to everything.

Imagine your horror as you open the attachments and begin listening to the recordings: A discussion of what to have for dinner, two children arguing over a toy, a woman talking to her partner as she gets into the shower.

2.) Besides the obvious privacy concerns of putting Alexa in your home, Alexa can be easily hacked and turned into an eavesdropping device.

When the attack [succeeds], we can control Amazon Echo for eavesdropping and send the voice data through network to the attacker.

3.) Amazon’s Healthcare partners act as though listening to people’s conversations is an act of benevolence.

“We believe voice technology, like Alexa, can make it easy for people to stay on the right path by tracking the status of their mail order prescription,” said Mark Bini, Vice President of Innovation and Member Experience, Express Scripts.

Mark Bini got one thing right: helping “people stay on the right path” will mean an increase in corporate profits as they data mine everything said by you and your family.

Cigna’s claim that divulging your personal health issues to Alexa allows customers to receive “personalized wellness incentives for meeting their health goals” is just another way of saying corporate spying.

“Personalized wellness incentives” is corporate jargon for sending you advertising or increasing a person’s health insurance premiums if they do not meet their health goals.

Amazon did not become the most valuable company in the world by helping people. The only reason why Amazon and its partners care about your healthcare is so they can profit from it.

You can read more at the MassPrivateI blog, where this article first appeared.

from:    https://www.activistpost.com/2019/04/amazon-alexa-spy-family-health.html

The DNA Database & We

Ancestry Websites Giving FBI Access to DNA Data; WikiLeaks Reveals CODIS Database Gifted To Other Countries; DHS Rolling Out Rapid DNA Nationwide

By Aaron Kesel

The FBI is abusing ancestry genealogy websites by tapping into their DNA data. What’s worse, these companies are giving up users’ data under presumed consent that is buried in their terms and conditions, according to several reports.

FamilyTreeDNA is the first company known to be cooperating directly with the FBI to give its agents access to its genealogy database, according to a BuzzFeed report.

A FamilyTreeDNA spokesperson told BuzzFeed that the company’s agreement with the FBI gives the agency the ability to search more than a million genetic profiles, the majority of which were provided by customers with no knowledge of the company’s relationship with the FBI. As part of the arrangement, FamilyTreeDNA has further agreed to test DNA evidence and identify the remains of deceased individuals from violent crimes for the FBI in its own laboratory.

In a statement, FamilyTreeDNA said that customers have the ability to opt out of matching features in their account settings. Doing so would prevent law enforcement from accessing their genetic information, but it also means a user would be unable to find potential family members through the service. According to Gizmodo, the company also seems to admonish those who choose to opt out by suggesting that it could be a “moral responsibility” to give up their private health information to the FBI.

However, the fact that genealogy companies are being subpoenaed by law enforcement isn’t a secret. In fact, it’s in the disclosures on their websites: FamilyTreeDNA, AncestryDNA, and 23andMe.

Forensic magazine reports that the FBI had access to FamilyTreeDNA’s database even before the partnership was formalized.

After news broke that the FBI was accessing user data, FamilyTreeDNA announced that it would allow its customers to bar law enforcement from accessing their data, Engadget reported.

As an interesting corporate connection to make, one of the co-founders of 23andMe, Anne Wojcicki, is married to Google’s Sergey Brin. Unsurprisingly, Google Inc. also backs the DNA analysis company.

Last year, drug giant GlaxoSmithKline invested US$300 million in the DNA-testing company in a deal that should raise eyebrows. A drug company working together with a DNA database company … what could possibly go wrong?

Under the deal, GSK has exclusive rights for four years to use 23andMe’s DNA database to develop new medicines using human genetics.

Activist Post reported last year that Houston police launched a pilot program with the company ANDE to test a machine called Rapid DNA that runs DNA tests in under two hours.

Local news station KHOU11 reported,

“This rapid DNA is the future. It comes down to when mathematicians stopped using abacuses and started using calculators. It’s that important to criminal justice,” said Lt. Warren Meeler, Houston Police Department, Homicide Division.

As part of the test program, proper protocol for using the technology has been to swab each piece of evidence twice. First, the Houston Forensic Science Center (HFSC) takes an official sample for the lab, then Houston police take a second sample for the trial machine.

Rapid DNA results can’t be used in court, and the technology is only used for investigations in Houston, according to the news outlet.

The technology has some forensic scientists worried about whether it should be used at crime scenes, warning about the accuracy of the technology.

“I think everybody is comfortable that if there is a high concentration of DNA from a single source, so an oral swab from an individual, we’re confident the instruments produce good data. The questions start to come in circumstances where we’ve got touch DNA — smaller quantities of DNA, more mixtures, there’s more people on that doorknob that I’m swabbing – there I’m not sure anybody knows yet,” said Dr. Peter Stout, President and CEO of the Houston Forensic Science Center.

However, further research shows that Houston isn’t the only city using Rapid DNA; police departments across the country have rolled out their own pilot programs to test these miniature portable DNA lab machines, which originate from the DHS.

“Rapid DNA, a newly commercialized technology developed by the Department of Homeland Security (DHS) Science and Technology Directorate (S&T), addresses these challenges by greatly expediting the testing of deoxyribonucleic acid (DNA) that is the only biometric that can accurately verify family relationships. This technology can be used on the scene of mass fatality events, in refugee camps around the world, or at immigration office,” the DHS’s website reads.

Police departments in Maryland, Pennsylvania, South Carolina, Florida, Utah, Arizona, Texas, California, and Delaware are or will be using DHS’s Rapid DNA.

An article in ProPublica warns that “over the last decade, collecting DNA from people who are not charged with — or even suspected of — any particular crime has become an increasingly routine practice for police.”

Congress enacted the “DNA Identification Act of 1994,” authorizing the FBI to maintain a centralized, national DNA database and to develop a software system to allow for the sharing of information within and between states for law enforcement. By 2004, the resulting system, the Combined DNA Index System (CODIS), connected the databases of all fifty states, which at that time were limited to profiles from those convicted of serious, violent crimes. Signed into law by President George W. Bush on October 30, 2004, the “Justice For All Act” greatly expanded the CODIS system, allowing collection of DNA from all federal felons and further enabling states to upload to CODIS profiles from anyone convicted of a crime, according to a congressional document released by WikiLeaks entitled “DNA Evidence: Legislative Initiatives in the 106th Congress.”

On January 5, 2006, a barely noticed piece of legislation entitled the “DNA Fingerprint Act of 2005” was also signed into law by President George W. Bush, severely expanding the government’s authority to collect and permanently retain DNA samples. The bill slipped through virtually unnoticed because it was buried in the back of the Violence Against Women Act (VAWA) reauthorization bill.

Unbeknownst to the public, the bill granted the government authority to obtain and permanently store DNA from anyone who is arrested as well as non-U.S. citizens detained under federal authorities like Border Control and DHS.

In December 2015, nearly ten years later, results from a Rapid DNA device were submitted as evidence in a successful prosecution for the first time, in an attempted murder case in Richland County, South Carolina. (That article has since been curiously deleted from Reuters and is only available on archive.org.)

A bill before Congress, introduced in December 2015 by Sen. Orrin Hatch, R-Utah, called for profiles collected by Rapid DNA devices to be connected to the FBI’s Combined DNA Index System, or CODIS, the software and national database that stores DNA profiles from federal, state, and local forensic laboratories.

During a Senate committee hearing on the Rapid DNA Act of 2015, disgraced former FBI Director James Comey said that passage of the bill “would help us change the world in a very, very exciting way. It will allow us, in booking stations around the country, if someone’s arrested, to know instantly—or near instantly—whether that person is the rapist who’s been on the loose in a particular community before they’re released on bail and get away or to clear somebody, to show that they’re not the person.”

In 2017, Sen. Charles Grassley (R-IA) introduced “the SECURE Act” (S. 2192) on December 5th. The bill largely borrows from two other federal bills, H.R. 3548 and S. 1757.

The Rapid DNA Act of 2017 (S. 139 and H.R. 510), passed last year, amended the DNA Identification Act of 1994, allowing previous hurdles to be surpassed by the new technology.

The bill was sponsored in the Senate by Senator Orrin Hatch (R-UT) with lead co-sponsor Senator Dianne Feinstein (D-CA), and in the House by Congressman James Sensenbrenner (R-WI) with lead co-sponsor Congressman Eric Swalwell (D-CA), along with 12 Senate and 24 House co-sponsors, Business Wire reported.

“Today marks a landmark day in more efficiently fighting crime and supporting law enforcement,” stated Robert Schueren, President and CEO of IntegenX. “IntegenX products have already enabled numerous DNA profile uploads to our nation’s DNA database (CODIS). We look forward to the updated FBI guidelines, and subsequent CODIS uploads from the booking environment.”

“Rapid DNA is a promising new technology and an effective tool for law enforcement – I’m thrilled to be seeing it signed into law. This technology will help quickly identify arrestees and offenders, reduce the overwhelming backlog in forensic DNA analysis, and make crime fighting more efficient while helping to prevent future crimes from occurring. It will also save time and taxpayer dollars,” commented Congressman Sensenbrenner, Chairman of the House Judiciary Subcommittee on Crime, Terrorism, Homeland Security and Oversight.

“This bill will help law enforcement agencies solve crimes faster and help those wrongfully accused to be exonerated from crimes they did not commit—almost instantly. The Rapid DNA Act updates the statutory framework in how DNA samples are entered into the FBI’s Combined DNA Index System by allowing the use of this remarkable Rapid DNA technology,” stated Senator Hatch.

In 2017, President Trump signed the Rapid DNA Act into law, enabling police booking stations in several states to connect their Rapid DNA machines to CODIS, the national DNA database.

But CODIS isn’t shared only among the states. A WikiLeaks Plus D release reveals that the DNA information processing and telecommunications system was gifted to Argentina in 2009 by U.S. Ambassador Earl Wayne, according to a cable. The system was donated to “help the province solve crimes and exonerate innocent suspects.”

“On the very topical issue of crime and personal security, the Ambassador helped launch the province’s participation in the Combined DNA Indexing System (CODIS). CODIS, an automated DNA information processing and telecommunications system, was donated by the FBI,” the cable reads.

Meanwhile, another WikiLeaks Plus D cable talks about “specialized training and state of the art equipment donations enabling Colombian forensic labs to investigate human rights violations more effectively. These donations included the enhancement of DNA analyzers and the CODIS database; upgrading of the Integrated Ballistics Identification System (IBIS); updating of forensic imaging and document analysis systems; upgrading of the automated fingerprint identification system; and the design and installation of a wireless network providing inter-agency connectivity and information sharing,” according to the cable, entitled: “SUPPORTING HUMAN RIGHTS AND DEMOCRACY: THE U.S. RECORD IN COLOMBIA 2004-2005.”

This raises several questions. How many other countries have been given access to the CODIS system? Is this DNA database shared among countries under an agreement similar to the Five Eyes spying arrangement, or did the U.S. sell the software as it did the infamous PROMIS software? And, as with PROMIS (the Inslaw scandal), does this software have a backdoor allowing U.S. intelligence agencies to access other countries’ DNA data?

These are all questions we should find ourselves asking.

Even the DHS has looked into using Rapid DNA technology for immigration purposes, to verify that adults crossing the border with children are actually their relatives. In 2015, however, the DHS postponed the program to develop stricter protocols for its use, Nextgov reported.

“The implementation of the program has been postponed until new voluntary consent forms are developed as well as operational protocols for translation,” Department of Homeland Security spokesman John Verrico told Nextgov in an email.

DHS documents obtained by the EFF state that the military may be interested in using rapid DNA in the future to reveal information about individuals such as their sex, race, health, and age.

In a 2013 privacy impact assessment for Rapid DNA pilot testing, the DHS stated that the portion of DNA analyzed by the devices does not reveal any “sensitive information about an individual, and will not, under any circumstances, be used for decisions based on those criteria.”

The EFF disagrees with Comey and the DHS. Internal DHS documents obtained by the civil liberties group show the agency itself anticipated that the pilot DNA program “may create controversy.” In a high-priority email from 2011, a DHS officer wrote to colleagues that “if DHS fails to provide an adequate response to media inquiries regarding RapidDNA quickly, civil rights/civil liberties organizations may attempt to shut down the test program.”

There are already numerous problems with keeping a DNA data bank. Privacy and civil rights advocates and watchdog groups have argued against California’s practice of retaining DNA from legally innocent people, saying it violates constitutional privacy rights, Mercury News reported.

Further, forensic labs (including the FBI’s) have been exposed in recent years for shoddy laboratory procedures, grossly inaccurate testimony by law enforcement, and, in a few cases, outright falsified documentation or mixed-up results. Like facial recognition biometric data, DNA has repeatedly been linked to the wrong person.

If all that isn’t reason enough to be skeptical of these systems: in 2015, the FBI found data errors within its own national CODIS database, The Washington Post reported.

In another case, familial DNA produced a false positive in an Idaho murder investigation, landing a completely bewildered Michael Usry in a police station while an FBI agent swabbed his cheek, Wired reported in 2015.

While genetics might be able to identify a felon, forensic scientists and lawyers agree that the information gathered should not be used for anything more than that. As the Supreme Court wrote in Maryland v. King, its decision allowing DNA collection, the issue is “open to dispute.”

Forensic Magazine notes the dangers of a DNA database, stating it is a threat to “medical privacy.”

These genetic databases are an absolute gold mine for law enforcement. I am not sure anyone can argue that catching serial killers and rapists, or using CODIS for tracking missing children is bad; however, problems start to arise when these genetic databases are used to target people for deportation or sweep up the completely innocent in its dragnet.

Along with facial recognition, DNA databases are the first step toward an Orwellian society in which the government knows your whereabouts at all times. It’s a nightmarish outlook for our future; what’s worse, in some instances, as with DNA, we are being tricked into giving up our freedoms and privacy. As a Congressional Research Service (CRS) “think tank” report warned: “future DNA collection cases might raise graver Fourth Amendment privacy concerns than previous cases.”

The FBI plans to begin rolling out Rapid DNA to more police departments slowly in 2019, according to a Washington Post report.

“Our goal in 2019 is to be able to have a pilot project done where we actually develop a DNA profile in a booking station, with no human review, and have it electronically enrolled and searched in the national database,” Thomas Callaghan, chief biometric scientist for the FBI Laboratory, told the news outlet. “We have to ensure that the quality that’s done in a lab can be done in a booking station.”

Aaron Kesel writes for Activist Post. Support us at Patreon. Follow us on Minds, Steemit, SoMee, BitChute, Facebook and Twitter. Ready for solutions? Subscribe to our premium newsletter Counter Markets.

from:    https://www.activistpost.com/2019/04/ancestry-websites-giving-fbi-access-to-dna-data-wikileaks-reveals-codis-database-gifted-to-other-countries-dhs-rolling-out-rapid-dna-nationwide.html

YouTube versus “Conspiracy” Content

1984: YouTube Will Demote Conspiracy Videos In Its Recommendation Algorithm — A Scary Prospect For Freedom Of Thought

By Aaron Kesel

YouTube said Friday it will stop recommending conspiracy videos such as those claiming the Earth is flat, or promoting alternative theories about the September 11, 2001 attacks.

We’ll continue that work this year, including taking a closer look at how we can reduce the spread of content that comes close to—but doesn’t quite cross the line of—violating our Community Guidelines. To that end, we’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.

While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community. To be clear, this will only affect recommendations of what videos to watch, not whether a video is available on YouTube. As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.

This change relies on a combination of machine learning and real people. We work with human evaluators and experts from all over the United States to help train the machine learning systems that generate recommendations. These evaluators are trained using public guidelines and provide critical input on the quality of a video.

While the former is a psyop — the Earth obviously isn’t flat and is a spheroid — the latter is the more worrying contention, since to this day there are still valid questions about 9/11. For information on 9/11 that doesn’t quite add up, you only need to watch two of James Corbett’s YouTube documentary films: 9/11 War Games and 9/11 Trillions: Follow The Money.

This also follows the news that a NYC federal grand jury has been empaneled to investigate the claims made by Architects and Engineers for 9/11 Truth, which will look into the evidence that the World Trade Center towers were brought down by controlled demolition using thermite.

This YouTube algorithm and policy change also comes as a mysterious group, The Dark Overlord (TDO), has claimed to have hacked “the truth behind 9/11” by breaching numerous insurers and legal firms, specifically Hiscox Syndicates Ltd, Lloyds of London, and Silverstein Properties. While not much has come out of the hack, there was one curious document alluding to military intervention in Flight 93, which, as the official account has it, was brought down by civilian passengers in a heroic act, not by the military.

Activist Post previously reported that YouTube was planning to combat conspiracy-driven videos by introducing informative debunking boxes linking back to Wikipedia and other sources. Apparently that wasn’t enough, and now the company is removing “conspiracy videos” from suggested videos as well.

We also reported that since Google was heading toward targeting critical thinkers — demonized as “conspiracy theorists” — who ask the difficult questions in its rating guidelines, YouTube wouldn’t take long to follow. It seems we were right!

Considering that the term “conspiracy theorist” originated with the CIA, I would say using a derogatory label for those who think is dangerous. In its more modern form, the tactic comes straight out of the JTRIG playbook that NSA whistleblower Edward Snowden revealed.

Misinformation is plaguing the Internet, but who is to decide what is and isn’t misinformation? The readers themselves must, because policing thought and opinion opens the door to a Truth Council, an information-oversight body in which admins (the purveyors of truth) decide what is and isn’t fact. What happens when one of those people doesn’t dig deep enough and dismisses something without looking at the evidence, out of a lack of information or understanding? The result is censorship not only of ideas but of people, who are effectively removed from the discussion.

As discussed in this reporter’s last article, “YouTube Purge: The End Of Freedom Of Expression Or The Great Awakening For Alternatives?”, questioning is healthy; and as writer Naomi Wolf has argued, you should think before it’s illegal to do so. “It’s no longer crazy to assess news events to see if they are real or not real,” she stated. As history has shown through declassified documents (the overthrow of Mossadegh), diplomatic cables leaked by WikiLeaks, and the reporting of murdered journalist Michael Hastings, who exposed propaganda used against the Senate and Congress: “all over the world, it’s well-established, the State Department intelligence agencies engage in theatre, and it’s what they do, it’s spycraft, to create spectacles and events that people may not realize are spectacles and events…,” Wolf says.

Hastings exposed the use of propaganda to get into Afghanistan in his report “The Afghanistan Report the Pentagon Doesn’t Want You to Read.” The article centered on a leaked, unclassified Pentagon report. That report lifted the shroud from the U.S. military’s psyops command, revealing several techniques the group uses in psychological warfare to manipulate the public, including but not limited to fabricated intelligence, withheld information, and social media manipulation, according to Lt. Colonel Daniel Davis. The kicker is that these tactics were used not only against the American people but against Senators as well.

It is extremely worrying that the military-industrial complex would manipulate elected officials with fake news, especially considering that domestic propaganda wasn’t legalized in America again until 2012. The Smith-Mundt Act had long barred it, a protection reinforced after the Church Committee’s mid-1970s investigations into intelligence abuses. Smith-Mundt’s domestic ban was effectively repealed in 2012 under Obama; as Business Insider reported, “The NDAA Legalizes The Use Of Propaganda On The US Public.”

As Arkansas Senator J. William Fulbright stated, VOA, Radio Free Europe, and many others “should be given the opportunity to take their rightful place in the graveyard of Cold War relics.” Fulbright’s amendment to Smith-Mundt was bolstered in 1985 by Nebraska Senator Edward Zorinsky, who argued that such “propaganda” should be kept out of America to distinguish the U.S. “from the Soviet Union where domestic propaganda is a principal government activity.”

This is extremely dangerous: two people may see the same event differently simply because one has acquired information the other lacks. For example, the U.S. government (specifically the CIA) has used documented propaganda on its own public and uses foreign propaganda against other countries. And it’s not just the CIA; other nations’ intelligence services do the same.

While one person might feel that claim is insane (and it quite literally is), another might know of Operation Mockingbird, which used CIA-employed journalists to produce fake stories from the 1950s through the 1970s, and which also funded student and cultural organizations and magazines as fronts. The operation was mentioned in the infamous CIA “Family Jewels” collection.

The U.K.’s smaller equivalent to Operation Mockingbird was Operation Mass Appeal, allegedly run by MI6 during 1997–98 to exaggerate Iraq’s weapons of mass destruction, according to former U.N. chief weapons inspector Scott Ritter. The claim was exaggerated further just a few years later, in 2003, when Downing Street produced a fake Iraq war memo that was exposed as being based on academic papers. The deception might never have come to light if not for Dr. David Kelly, one of the lead scientists involved, who called the Iraq dossier a sham. Kelly was later found dead in the woods, and his death remains a mystery to this day.

Another example is how the media as a whole portrayed a video claimed to be from Syria (the “Syrian boy hero”) as real, when it was later revealed by Norwegian filmmakers to have been faked. As a result, the media had to backpedal, issuing retractions.

Years later, in an unrelated incident, five people were arrested for using children in staged Aleppo videos, showing how dangerous it is to report any information out of Syria, as well as how important it is to have independent free thinkers.

Now, a UN panel has revealed (with little media attention) that the infamous White Helmets in Syria, the subjects of an Oscar-winning documentary, engaged in criminal activity including but not limited to organ theft, staged rescues, and theft from civilians. As a further fun fact, the leader of the White Helmets, Raed Saleh, was denied entry into the U.S. at Washington’s Dulles International Airport and deported, due to “extremist connections,” while on his way to receive a humanitarian relief award at a gala dinner hosted by USAID.

Really, none of this should come as a surprise, since the White Helmets are connected to the Free Syrian Army (FSA), which in turn is connected to al-Qaeda and al-Nusra.

Perhaps a better example, one that doesn’t involve propaganda and that more people can relate to, is the situation in Flint, Michigan, where the water was poisoned through negligence that the local government then attempted to cover up. YouTube as a medium allowed those citizens to have a voice and to show the carelessness of their government officials. Further, the government even removed the citizens’ power to sue the state of Michigan over the lead contamination of its water supply.

For a moment imagine that this was called fake; these people would have been ignored far more than they were by the national mainstream media. Policing information is outright reckless and could endanger lives.

Then there is the Army’s spraying of carcinogenic chemicals on unknowing residents of the U.S. and Canada under Operation DEW and Operation Large Area Coverage (LAC) during the Cold War, testing linked to radiological weaponry meant for use against the Soviet Union. Which, frankly, sounds absolutely bonkers; but if you study history, you will see it was far from the worst of that era. Consider the infamous Project MKUltra, a covert program that subjected people across the country to various experiments, many times against their will.

So for YouTube to link to one source that anyone can edit and claim it as the moral high ground of “truth” is crazy; but to then introduce a recommendation block on “conspiratorial information” is outright insanity, and it suppresses research efforts.

It doesn’t matter what your views are or what you think about a particular subject: YouTube is aiming to censor the free flow of information, and that is dangerous for a democratic society. Channels promoting free thinking and the questioning of news events will now face further demotion within YouTube’s algorithms. These actions endanger a free and open society. No one should be able to decide what a user can and can’t search for; no individual platform should be able to decide what is and isn’t the truth for its users; and, in the same respect, no one should be able to decide who does and doesn’t have a voice. (That is the silencing of freedom of opinion and expression.)

YouTube is walking us straight into George Orwell’s nightmare of 1984 through its proposed actions to silence free thinkers deemed “conspiracy theorists.” I will be the first to tell you that some theories are batshit crazy, such as the theory of flat Earth. But that doesn’t mean I want the content censored. Another example is the rise of an anonymous insider who has been wrong more times than I can count on two hands: Q. Yet again, I don’t want YouTube, as a corporate giant, to have the ability to censor anyone who speaks about the Quidiot conspiracy. Because if you give them an inch they will take a mile, and they will begin censoring other topics or even individuals, as they already have, including Activist Post’s own YouTube channel.

If someone wants to promote a ridiculous theory, they should be free to do so. After all, it’s their own credibility at stake. A democratic society is free and open and full of debates; and while YouTube wants to promote the idea that theories about 9/11 are ludicrous, there are far more dots that don’t add up than they or the general public care to see or admit. (I won’t go into the topic, as it would take far too long to dive into, but I’ll make a few quick suggestions of names and events to research: Michael Riconosciuto, John Patrick O’Neill, Bill Cooper, Able Danger, the dancing Israelis, WTC 7, the bombs on the George Washington Bridge, et al.)

See my article: “The Official Narrative Of 911 A Bigger Conspiracy Theory” for just some of the various evidence against the official story.

It’s particularly worrying that the blog post singles out theories about 9/11, one of the worst tragedies in American history and one still shrouded in mystery. Again, there is more about 9/11 that doesn’t add up than makes sense. There are several holes, such as the various war-game drills that James Corbett details in his documentary 9/11 War Games. We may never know what happened on 9/11, but there is far more to it than the official government narrative, and we the people have a right to know, or at the very least to seek out potential answers.

While YouTube wants you to think the governments of the world aren’t involved in any sort of corruption, “conspiratorial plots,” or cover-ups, history has proven quite the opposite. All of this information now risks being censored under YouTube’s policy and algorithm changes, a scary and worrying prospect. It seems as though the company wants to protect the establishment rather than allow people to think freely for themselves. This is about the human right not to be indoctrinated, but rather to make up our own minds. Even when we are wrong about a particular subject (such as those of you who think the Earth is flat), that freedom allows for healthy debate among individuals and helps stop tyranny and tyrannical rule by dictatorships.

For now, at the very least, we can be thankful that YouTube says it will not outright ban all content it designates as conspiracy theory (yet), despite the recent purge of dozens upon dozens of accounts connected to free speech and free thought. There are also always alternatives, such as DTube, BitChute, and many others, for uploading content. We need to ask ourselves: is the YouTube purge the end of freedom of expression, or the great awakening for alternatives?

YouTube’s moves against free thinkers could backfire quite severely for the company, because truth is stranger than fiction. Although this writer agrees with YouTube that the world is a spheroid, definitely not flat, nor perfectly round for that matter, it is important to have free, independent thought and speech. That holds even if it means I have to share the planet with flat-Earthers, or with people who believe every crazed murder spree is a false flag attack. (Granted, some might be: Operation Northwoods proposed false flag attacks against Cuba, and a Cold War memo suggested a false flag attack against Russia using civilians as cannon fodder, so the suggestion isn’t that insane.)

The rapid changes we are witnessing in the main drivers of Internet perception have even drawn the attention of the inventor of the World Wide Web, Tim Berners-Lee. He noted in an open letter that “What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms.” Do we really want those dominant platforms telling us their exclusive version of the truth?


from:   https://www.activistpost.com/2019/01/1984-youtube-will-demote-conspiracy-videos-in-its-recommendation-algorithm-a-scary-prospect-for-freedom-of-thought.html

Airbnb is Watching

Airbnb Patrons Are Finding More and More Cameras In Their Rooms — Here’s How To Check For Cameras

By Aaron Kesel

More and more Airbnb hosts are hiding security cameras in their rooms, and the company doesn’t seem worried about the practice as long as hosts disclose the cameras and keep them out of bathrooms and bedrooms, according to a report by Fast Company.

“If you find a truly hidden camera in your bedroom or bathroom, AirBnB will support you. If you find an undisclosed camera in the private living room, AirBnB will not support you,” Jeffrey Bigham, a computer science professor at Carnegie Mellon University, told Fast Company.

Bigham described his recent experience of finding cameras in his “private living room” at an Airbnb in a blog post titled “A Camera is Watching You in Your AirBnB: And, you consented to it.”

“I just assume that there will be camera constantly recording when I stay in airbnb, or anywhere really. They way I never have to worry about whether it exist or not. As recording technology becoming more and more advance, it’s less and less reasonable to expect privacy. I rather adapt my life to fit this new culture,” Bigham writes.

Airbnb argued that because a single camera was visible in photos advertising the rooms, the owner had disclosed the security cameras.

Airbnb has since apologized and has given Bigham a refund, according to CNET. A spokesperson provided the publication with the following statement:

Our community’s privacy and safety is our priority, and our original handling of this incident did not meet the high standards we set for ourselves. We have apologized to Mr. Bigham and fully refunded him for his stay. We require hosts to clearly disclose any security cameras in writing on their listings and we have strict standards governing surveillance devices in listings.  This host has been removed from our community.

However, Bigham is far from the only Airbnb customer to find cameras in a rented room; and while Bigham found his in a “private living room,” others have found them in more private places, like bathrooms and bedrooms.

Another case happened last September in Toronto, Canada, where a couple — Dougie Hamilton and his girlfriend — rented an Airbnb flat and discovered hidden cameras in their bedroom, News.com.au reported.

Hamilton told the Daily Record:

We were only in the place for 20 minutes when I noticed the clock. We’d had a busy day around the city and finally were able to get to the Airbnb and relax.

I just happened to be facing this clock and was staring at it for about 10 minutes. There was just something in my head that made me feel a bit uneasy.

It was connected to a wire like a phone charger which wasn’t quite right. The weirdest thing was, I’d seen a video on Facebook about cameras and how they could be hidden and they had a clock with one in it, too.

Last fall, a couple on a Florida vacation found a camera hidden in a smoke detector in the bedroom of their Longboat Key condo.

Another, less recent case was posted on Reddit four years ago, claiming a couple found a camera from a Dropcam, a connected home security camera made by Google’s Nest. The couple found the camera hidden in a mesh basket before unplugging it, according to the post.

According to Airbnb’s rules, the company states:

Our Standards & Expectations require that all members of the Airbnb community respect each other’s privacy. More specifically, we require hosts to disclose all surveillance devices in their listings, and we prohibit any surveillance devices that are in or that observe the interior of certain private spaces (such as bedrooms and bathrooms) regardless of whether they’ve been disclosed.

So how do you determine whether there are hidden cameras in a room? There is no foolproof method, but there are ways to try. Start by shining your phone’s flashlight around the darkened room and looking for light bouncing off a camera lens. According to Digital Trends, this helps you spot lenses otherwise invisible to the human eye, hidden in shadows or built into objects such as clocks, walls, bureaus, and other furniture.

Other places that cameras can be hidden include:

  • Motion sensors
  • Smoke detectors
  • Alarm clocks
  • Wall clocks
  • Plug in air fresheners (especially if they don’t give off any scent)
  • Stuffed animals
  • Books on a shelf (where a camera is embedded in the spine of a fake book)
  • Cooking canisters and spice racks

The next way to find hidden surveillance devices involves scanning for WiFi-enabled cameras on the local network. Motherboard has provided a relatively simple shell script that can not only find such cameras but also disable them.

However, Julian Oliver, who wrote the script, explains that it may be illegal to run it due to changes made by the FCC.
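The passive half of that idea, identifying likely cameras without interfering with them, can be sketched in a few lines of Python. The snippet below checks the MAC addresses of devices on your local network (for example, the (IP, MAC) pairs you can read out of `arp -a`) against a table of vendor prefixes (OUIs). Note that the `CAMERA_OUIS` table here is an illustrative assumption, not a complete or verified list; a real check should consult the full IEEE OUI registry.

```python
from typing import List, Optional, Tuple

# Illustrative OUI -> vendor table. These prefixes are examples only;
# consult the IEEE OUI registry for an authoritative, complete mapping.
CAMERA_OUIS = {
    "30:8C:FB": "Dropcam",
    "18:B4:30": "Nest Labs",
}

def normalize_mac(mac: str) -> str:
    """Uppercase a MAC address and standardize separators to colons."""
    return mac.strip().upper().replace("-", ":")

def camera_vendor(mac: str) -> Optional[str]:
    """Return the camera vendor name if the MAC's OUI matches, else None."""
    oui = normalize_mac(mac)[:8]  # first three octets, e.g. "30:8C:FB"
    return CAMERA_OUIS.get(oui)

def flag_cameras(arp_entries: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Given (ip, mac) pairs, return (ip, vendor) for suspected cameras."""
    hits = []
    for ip, mac in arp_entries:
        vendor = camera_vendor(mac)
        if vendor is not None:
            hits.append((ip, vendor))
    return hits
```

Keep in mind the limits of this approach: a camera on a wired connection, on a separate network, or from a vendor not in your table will produce no hit, so an empty result does not prove the room is camera-free.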

If you do find cameras, Airbnb says you can cancel your reservation for a full refund if the cameras weren’t disclosed or are found in an unreasonable area such as a bathroom or bedroom. This problem is troubling, and it doesn’t affect only Airbnb but also similar services such as VRBO and HomeAway. Both companies have similar policies; VRBO says cameras are never to be placed in an area where guests “can reasonably expect privacy.” The problem, however, is that some owners don’t follow those rules, so ultimately you can only trust yourself.


Image credit: Pixabay

from:    https://www.activistpost.com/2019/01/airbnb-patrons-are-finding-more-and-more-cameras-in-their-rooms-heres-how-to-check-for-cameras.html

What Does Your License Plate Say About You?

Big Brother Digital License Plates Coming To A State Near You

By MassPrivateI

Michigan became the second state in the country to roll out the world’s first digital license plate.

According to an article in The Car Connection, the state of Michigan just approved Reviver Auto’s digital license plates, called the “Rplate.”

Reviver Auto boasts that a total of five states have already approved their digital license plates.

Our innovative, multi-functional digital license plate, the Rplate Pro, will be on the road in California, Florida, Arizona and Texas in 2018. (Source)

George Orwell could never have dreamed of a world where license plates became a tool for more government surveillance.

Digital license plates are a privacy nightmare for motorists.

No longer will law enforcement have to run your license plate to see if you paid your taxes and insurance, because now your license plate will display a big “X” notifying everyone that you are a violator.

Digital license plates are the epitome of Big Brother surveillance

According to an article in WIRED, Rplates will turn vehicles into rolling Big Brother billboards that display Amber Alerts and much more.

It lets you update the registration stickers on your car through an app instead of dealing with the DMV. It can display Amber Alerts. It can be used as a miniature, knee-level billboard. If someone steals the car, it can read $NDHLP! or the more serious Stolen Vehicle. It can double as your E-Z Pass, FasTrak, or whatever RFID-based device you use to pay tolls. It can track your car’s location, so you can keep tabs on your teenager.

According to Reviver Auto, motorists have to pay a $99 annual subscription fee and a monthly $8.00 LTE fee for the privilege of being surveilled.

Who does not want to pay for the privilege of having a license plate that tracks your every movement and displays government approved messages?

But that is not all digital license plates can do.

The true purpose of using digital license plates that track your whereabouts is corporate greed.

The Car Connection article warns that corporations will use digital license plates to send “targeted messaging” to other drivers.

Another feature that may raise eyebrows is targeted messaging, which is a way to display advertisements on the license plate. The plate’s numbers can be minimized to leave more room for an ad when the car is parked. It’s not clear if Michigan would allow vehicle owners to opt into or out of ads, however.

How is that for creepy?  License plates that send targeted advertising based on your location?

Make no mistake, digital license plates are nothing more than Big Brother/corporate surveillance devices.

from:    https://www.activistpost.com/2019/01/big-brother-digital-license-plates-coming-to-a-state-near-you.html

Time To Protect Your IN-PHONE-FO

Change your phone settings so Apple, Google can’t track your movements

Your phone tracks your movements all the time. grapestock/Shutterstock.com

Technology companies have been pummeled by revelations about how poorly they protect their customers’ personal information, including an in-depth New York Times report detailing the ability of smartphone apps to track users’ locations. Some companies, most notably Apple, have begun promoting the fact that they sell products and services that safeguard consumer privacy.

Smartphone users are never asked explicitly if they want to be tracked every moment of each day. But cellular companies, smartphone makers, app developers and social media companies all claim they have users’ permission to conduct near-constant personal surveillance.

The underlying problem is that most people don’t understand how tracking really works. The technology companies haven’t helped teach their customers about it, either. In fact, they’ve intentionally obscured important details to build a multi-billion-dollar data economy based on an ethically questionable notion of informed consent.

How consumers are made to agree

Most companies disclose their data protection practices in a privacy policy; most software requires users to click a button saying they accept the terms before using the program.

But people don’t always have a free choice. Instead, it’s a “take-it-or-leave-it” agreement, in which a customer can use the service only if they agree.

Consumers often do not have a free choice when it comes to privacy agreements. Marta Design/Shutterstock.com

Anyone who actually wants to understand what the policies say finds the details buried in long legal documents, unreadable by nearly everyone except, perhaps, the lawyers who helped create them.

Often, these policies will begin with a blanket statement like “your privacy is important to us.” However, the actual terms describe a different reality. It’s usually not too far-fetched to say that the company can basically do whatever it wants with your personal information, as long as it has informed you about it.

U.S. federal law does not require that a company’s privacy policy actually protect users’ privacy. Nor are there any requirements that a company must inform consumers of its practices in clear, nonlegal language or provide consumers a notice in a user-friendly way.

Theoretically, users might be able to vote with their feet and find similar services from a company with better data-privacy practices. But take-it-or-leave-it agreements for technologically advanced tools limit the power of competition across nearly the entire technology industry.

Data sold to third parties

There are a few situations where mobile platform companies like Apple and Google have let people exercise some control over data collection.

For example, both companies’ mobile operating systems let users turn off location services, such as GPS tracking. Ideally, this should prevent most apps from collecting your location – but it doesn’t always. Further, it does nothing if your mobile provider resells your phone’s location information to third parties.

App makers are also able to persuade users not to turn off location services, again with take-it-or-leave-it notifications. When managing privileges for iOS apps, users get to choose whether the app can access the phone’s location “always,” “while using the app” or “never.”

But changing the setting can trigger a discouraging message: “We need your location information to improve your experience,” says one app. Users are not asked other important questions, like whether they approve of the app selling their location history to other companies.

And many users don’t know that even when their name and contact information are removed from location data, a modest location history can reveal their home addresses and the places they visit most, offering clues to their identities, medical conditions and personal relationships.
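To see why stripping names offers so little protection, here is a minimal sketch (hypothetical data and illustrative helper names, not any real company’s pipeline) that guesses a likely home location simply by finding the place a device sits most often overnight:

```python
from collections import Counter
from datetime import datetime

# Hypothetical "anonymized" location pings: (ISO timestamp, lat, lon).
# The name is gone, but the pattern is not.
pings = [
    ("2019-01-10T02:14:00", 37.7793, -122.4192),  # overnight
    ("2019-01-10T09:30:00", 37.7936, -122.3965),  # office
    ("2019-01-10T23:55:00", 37.7793, -122.4192),  # overnight
    ("2019-01-11T03:40:00", 37.7793, -122.4192),  # overnight
    ("2019-01-11T12:05:00", 37.7702, -122.4671),  # errand
]

def likely_home(pings, night_start=22, night_end=6, precision=3):
    """Guess a home location: the spot most often visited at night.

    Coordinates are rounded (3 decimals is roughly a city block) so
    nearby pings cluster into a single "place".
    """
    night_spots = Counter()
    for ts, lat, lon in pings:
        hour = datetime.fromisoformat(ts).hour
        if hour >= night_start or hour < night_end:
            night_spots[(round(lat, precision), round(lon, precision))] += 1
    spot, _count = night_spots.most_common(1)[0]
    return spot

print(likely_home(pings))  # → (37.779, -122.419), the overnight cluster
```

Three overnight pings are enough to single out one address block here; a real week of location history would narrow things down far more.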

Why people don’t opt out

Websites and apps make it difficult, and sometimes impossible, for most people to say no to aggressive surveillance and data collection practices. In my role as a scholar of human-computer interaction, one issue I study is the power of defaults.

When companies set a default in a system, such as “location services set to on,” people are unlikely to change it, especially if they are unaware there are other options they could choose.

Further, when it is inconvenient to change the location services, as is the case on both iOS and Android systems today, it’s even less likely that people will opt out of location collection – even when they dislike it.

Companies’ take-it-or-leave-it privacy policies and default choices for users’ privacy settings have created an environment where people are unaware that their lives are being subjected to minute-by-minute surveillance.

They’re also mostly not aware that information that could identify them individually is resold to create ever-more-targeted advertising. Yet the companies can legally, if not ethically, claim that everyone agreed to it.

Overcoming the power of defaults

Monitor your phone’s default settings. Georgejmclittle/Shutterstock.com

Privacy researchers know that people dislike these practices, and that many would stop using these services if they understood the extent of the data collection. If invasive surveillance is the price of using free services, many would rather pay or at least see companies held to stronger data collection regulations.

The companies know this too, which is why, I argue, they use a form of coercion to ensure participation.

Until the U.S. has regulations that, at a minimum, require companies to ask for explicit consent, individuals will need to know how to protect their privacy. Here are my three suggestions:

  • Start by learning how to turn off location services on your iPhone or Android device.
  • Turn location on only when using an app that clearly needs location to function, such as a map.
  • Avoid apps, such as Facebook Mobile, that dig deeply into your phone for as much personal information as possible; instead, use a browser with a private mode, like Firefox.

Don’t let default settings reveal more about you than you want.

from:    “Change your phone settings so Apple, Google can’t track your movements,” by Jen King, Director of Consumer Privacy, Center for Internet and Society, Stanford University, The Conversation, January 14, 2019. (The Center for Internet and Society receives funding from multiple organizations; information is available at http://cyberlaw.stanford.edu/about-us)

from:https://theconversation.com/change-your-phone-settings-so-apple-google-cant-track-your-movements-109059

Taking Back Your Internet Data

ALTERNATIVE NEWS

WWW Inventor’s New Internet OS Would Allow Users To Control Their Personal Data


IN BRIEF
  • The Facts: Tim Berners-Lee, inventor of the ‘World Wide Web,’ has created a new startup company named Inrupt, which is poised to ‘interrupt’ the data domination and invasion of privacy of big internet companies like Facebook and Google.
  • Reflect On: Can you envision an internet in which each one of us is the gatekeeper of our own data and we can all operate on the internet in an equitable way?

Facebook, Google, and the rest of the censorship and data-mining cabal: you have now officially been put on notice.

Something that I hinted at in a previous article, ‘Anti-Defamation League, Facebook, Google & Youtube Appoint Themselves As Official Internet Censor’, has taken on new significance. I said then that I wasn’t sure the “censorship cabal” led by Facebook should really be messing with an Awakening Community; it turns out we have some pretty powerful people in the Awakening Community.

Tim Berners-Lee, inventor of the ‘World Wide Web’ and one of Time magazine’s ‘100 most important people of the 20th century,’ had the noblest intentions when he turned the keys of the internet over to the world for free in 1989. This awakened genius saw the potential for increased openness, connectivity, and productivity on the platform which was fundamentally designed as a medium for positive change and human empowerment.

The Emergence Of  A Frankenstein

Instead, Berners-Lee has seen his creation turn into some kind of Frankenstein, as noted in this Zero Hedge article:

“For people who want to make sure the Web serves humanity, we have to concern ourselves with what people are building on top of it,” Tim Berners-Lee told Vanity Fair last month. “I was devastated,” he said, while going through a litany of harmful and dangerous developments of the past three decades of the web. He lamented that his creation has been abused by powerful entities for everything from mass surveillance to fake news to psychological manipulation to corporations commodifying individuals’ information.

Berners-Lee has worked in recent years in and out of different companies and advocacy groups trying to preserve the sanctity of the internet and retain its initial purpose and vision, but despite his efforts, he has seen its gradual takeover by powerful entities who have been able to centralize much of the internet’s activities, and along with this have been able to hoard much of its valuable information.

Dreams Of Freedom And Openness

In the face of this, Berners-Lee and other internet activists have long been dreaming of a digital utopia where individuals control their own data and the internet remains free and open. But for Berners-Lee, the time for dreaming is over. “We have to do it now. It’s a historical moment,” he has said. Ever since revelations emerged that Facebook had allowed people’s data to be misused by political operatives, Berners-Lee has felt an imperative to get this digital idyll into the real world.

And so, Berners-Lee has launched a start-up that intends to end the dominance of Facebook, Google, and Amazon, while in the process letting individuals take back control of their own data.

Solid and Inrupt

This began with ‘Solid,’ a decentralized web platform that Berners-Lee designed and built with a small team at MIT over several years. It can be thought of as a kind of operating system for the internet, one that will serve as the foundation for applications supporting the decentralization of information.

On Solid, all of one’s information is under the user’s control. Every bit of data he or she creates or adds on Solid exists within a Solid ‘pod’–which is an acronym for personal online data store. These pods are what give Solid users control over their applications and information on the web. Anyone using the platform will get a Solid identity and Solid pod. This is how people, Berners-Lee says, will take back the power of the web from corporations.

He then created Inrupt, a new online platform and company that serves as a user interface to these pods, where everything from messages, music and contacts to other personal data will be stored in one place overseen by the user, instead of in an array of platforms and apps run by corporations seeking to profit off personal information. The project seeks “personal empowerment through data” and aims to “take back” the web, according to company statements.

Inrupt Will Just Be One Of Many

Once again, as per the Zero Hedge article:

Unlike Facebook or Twitter where all user information ultimately resides in centralized data centers and servers under control of the companies, applications on Inrupt will compete for users based on the services they can offer, and only the users can grant these apps “views” into their data, making personal data instantly portable between similar applications.

“The main enhancement is that the web becomes a collaborative read-write space, passing control from owners of a server, to the users of that system. The Solid specification provides this functionality,” the Solid website says.

If all goes as planned, Inrupt will be to Solid what Netscape once was for many first-time users of the web: an easy way in. And like with Netscape, Berners-Lee hopes Inrupt will be just the first of many companies to emerge from Solid. In this way, creative developers will be able to compete with their latest and greatest interfaces to internet information, but unlike the opportunity seized by the likes of Facebook and Google, these new interfaces will never be able to ‘own’ or ‘house’ people’s personal data, and therefore the corrupt and fraudulent abuse of that data by big corporations will disappear from the internet. This failsafe is now built into the architecture of the Solid operating system.
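The pod-and-views mechanism described above can be sketched as a toy Python model (illustrative only; the real Solid specification uses WebIDs, linked data and HTTP access control, not this class): the data never leaves the user’s pod, an application sees only the fields it has been granted a “view” into, and the user can revoke that grant at any time.

```python
# Toy model of a Solid-style "pod" (a sketch, not the actual Solid API):
# all data lives in one store the user owns; apps get only granted views.

class Pod:
    def __init__(self, owner):
        self.owner = owner
        self._data = {}      # everything stays inside the user's pod
        self._grants = {}    # app name -> set of fields it may view

    def put(self, field, value):
        self._data[field] = value

    def grant(self, app, *fields):
        self._grants.setdefault(app, set()).update(fields)

    def revoke(self, app):
        self._grants.pop(app, None)  # the app loses its view instantly

    def view(self, app):
        allowed = self._grants.get(app, set())
        return {f: v for f, v in self._data.items() if f in allowed}

pod = Pod("alice")
pod.put("contacts", ["bob", "carol"])
pod.put("location_history", ["home", "office"])

pod.grant("messaging-app", "contacts")
print(pod.view("messaging-app"))  # {'contacts': ['bob', 'carol']} - no location

pod.revoke("messaging-app")
print(pod.view("messaging-app"))  # {} - nothing leaves the pod without consent
```

The point of the design is visible in the last line: because the app only ever holds a view rather than a copy it controls, revoking consent actually means something.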

The Takeaway

Many of us in the Awakening Community have been upset by the assault on our privacy and our freedom by the large internet corporations, but we can take solace in the fact that sometimes these very acts of injustice are what triggers consciousness to move us forward, and enables awakened humans to fulfill their dreams of creating the next great thing that will truly empower humanity.

from:    https://www.collective-evolution.com/2018/10/06/tim-berners-lee-internet-os-control-personal-data/