Senior Technical Writer @ Opplane (Lisbon, Portugal). PhD in Communication Sciences (ISCTE-IUL). Past: technology journalist, blogger & communication researcher.
#TechnicalWriting #WebDev #WebDevelopment #OpenSource #FLOSS #SoftwareDevelopment #IP #PoliticalEconomy #Communication #Media #Copyright #Music #Cities #Urbanism
@tseitr@mastodon.sdf.org “Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health data.”
"In just 20 minutes this morning, an automated license plate recognition (ALPR) system in Nashville, Tennessee captured photographs and detailed information from nearly 1,000 vehicles as they passed by. Among them: eight black Jeep Wranglers, six Honda Accords, an ambulance, and a yellow Ford Fiesta with a vanity plate.
This trove of real-time vehicle data, collected by one of Motorola’s ALPR systems, is meant to be accessible by law enforcement. However, a flaw discovered by a security researcher has exposed live video feeds and detailed records of passing vehicles, revealing the staggering scale of surveillance enabled by this widespread technology.
More than 150 Motorola ALPR cameras have exposed their video feeds and leaked data in recent months, according to security researcher Matt Brown, who first publicised the issues in a series of YouTube videos after buying an ALPR camera on eBay and reverse engineering it."
https://www.wired.com/story/license-plate-reader-live-video-data-exposed/
@dohpaz42@lemmy.world Yes, because they do worse… :-/
It’s becoming increasingly difficult to differentiate some US states from Iran or Afghanistan…
@ointersexo For many years I didn't have a cell phone - only a tablet. The problem is that more and more basic services - banking, meal cards, etc. - only work on a smartphone because they require an app. That complicates the picture. Competition regulators should require these providers to offer a web version of those same apps, with no need to resort to a cell phone.
@ointersexo Yes, I see more and more people opting for an old "brick" phone
“The utility of the activity data in risk mitigation and behavioural modification is questionable. For example, an actuary we interviewed, who has worked on risk pricing for behavioural Insurtech products, referred to programs built around fitness wearables for life/health insurance, such as Vitality, as ‘gimmicks’, or primarily branding tactics, without real-world proven applications in behavioural risk modification. The metrics some of the science is based on, such as the BMI or 10,000 steps requirement, despite being so widely associated with healthy lifestyles, have ‘limited scientific basis.’ Big issues the industry is facing are also the inconsistency of use of the activity trackers by policyholders, and the unreliability of the data collected. Another actuary at a major insurance company told us there was really nothing to stop people from falsifying their data to maintain their status (and rewards) in programs like Vitality. Insurers know that somebody could just strap a FitBit to a dog and let it run loose to ensure the person reaches their activity levels per day requirement. The general scepticism (if not broad failure) of products and programs like Vitality to capture data useful for pricing premiums or handling claims—let alone actually induce behavioural change in meaningful, measurable ways—is widely acknowledged in the industry, but not publicly discussed.”
https://www.sciencedirect.com/science/article/pii/S0267364924001614
"On Tuesday the Consumer Financial Protection Bureau (CFPB) published a long anticipated proposed rule change around how data brokers handle peoples’ sensitive information, including their name and address, which would introduce increased limits on when brokers can distribute such data. Researchers have shown how foreign adversaries are able to easily purchase such information, and 404 Media previously revealed that this particular data supply chain is linked to multiple acts of violence inside the cybercriminal underground that has spilled over to victims in the general public too.
The proposed rule in part aims to tackle the distribution of credit header data. This is the personal information at the top of a credit report which doesn’t discuss the person’s actual lines of credit. But currently credit header data is distributed so widely, to so many different companies, that it ends up in the hands of people who use it maliciously."
"The United States government’s leading consumer protection watchdog announced Tuesday the first steps in a plan to crack down on predatory data broker practices that the agency says help fuel scams, violence, and threats to US national security.
The Consumer Financial Protection Bureau is proposing a rule that would allow regulators to police data brokers under the Fair Credit Reporting Act (FCRA), a landmark privacy law enacted more than a half century ago. Under the proposal, data brokers would be limited in their ability to sell certain sensitive personal information, including financial data and credit scores, phone numbers, Social Security numbers, and addresses. The CFPB says that closing the loopholes allowing data brokers to trade in this data with little to no oversight will benefit vulnerable people and the US as a whole."
https://www.wired.com/story/cfpb-fcra-data-broker-oversight/
"End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.
This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI “assistants” within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features."
https://eprint.iacr.org/2024/2086.pdf