Microsoft acquires conversational AI startup Semantic Machines to help bots sound more lifelike

Microsoft announced today that it has acquired Semantic Machines, a Berkeley-based startup that wants to solve one of the biggest challenges in conversational AI: making chatbots sound more human and less like, well, bots.

In a blog post, Microsoft AI & Research chief technology officer David Ku wrote that “with the acquisition of Semantic Machines, we will establish a conversational AI center of excellence in Berkeley to push forward the boundaries of what is possible in language interfaces.”

According to Crunchbase, Semantic Machines was founded in 2014 and raised about $20.9 million in funding from investors including General Catalyst and Bain Capital Ventures.

In a 2016 profile, co-founder and chief scientist Dan Klein told TechCrunch that “today’s dialog technology is mostly orthogonal. You want a conversational system to be contextual so when you interpret a sentence things don’t stand in isolation.” By focusing on memory, Semantic Machines’ AI can produce conversations that not only answer or predict questions more accurately, but also flow naturally.

Instead of building its own consumer products, Semantic Machines focused on enterprise customers. This means it will fit in well with Microsoft’s conversational AI-based products, including Microsoft Cognitive Services and Azure Bot Service, which are used by one million and 300,000 developers, respectively, and virtual assistants Cortana and Xiaoice.


Source: Tech Crunch

Progressive advocacy groups call on the FTC to “make Facebook safe for democracy”

A team of progressive advocacy groups, including MoveOn and Demand Progress, is asking the Federal Trade Commission to “make Facebook safe for democracy.” According to Axios, the campaign, called Freedom From Facebook, will launch a six-figure ad campaign on Monday that will run on Facebook, Instagram and Twitter, among other platforms.

The other advocacy groups behind the campaign are Citizens Against Monopoly, Content Creators Coalition, Jewish Voice for Peace, Mpower Change, Open Markets Institute and SumOfUs. Together they are calling on the FTC to “break up Facebook’s monopoly” by forcing it to spin off Instagram, WhatsApp and Messenger into separate, competing companies. They also want the FTC to require interoperability, so users can communicate across competing social networks, and to strengthen privacy regulations.

Freedom From Facebook’s site also includes an online petition and privacy guide that links to FB Purity and the Electronic Frontier Foundation’s Privacy Badger, browser extensions that help users streamline their Facebook ad preferences and block online trackers, respectively.

The FTC recently gained a new chairman after President Donald Trump’s pick for the position, Joseph Simons, was sworn in earlier this month, along with four new commissioners also nominated by Trump. Simons is an antitrust lawyer who has represented large tech firms like Microsoft and Sony. The FTC is currently investigating whether Facebook’s involvement with Cambridge Analytica violated a previous legal agreement it had with the commission. But many people are wondering whether it and other federal agencies are capable of regulating tech companies at all, especially after many lawmakers seemed confused about how social media works during Facebook CEO Mark Zuckerberg’s congressional hearing last month.

Despite its data privacy and regulatory issues, Facebook is still doing well from a financial perspective. Its first-quarter earnings report showed strong user growth and revenue above Wall Street’s expectations.

TechCrunch has contacted Freedom From Facebook and Facebook for comment.


Source: Tech Crunch

Are algorithms hacking our thoughts?

As Facebook shapes our access to information, Twitter dictates public opinion, and Tinder influences our dating decisions, the algorithms we’ve developed to help us navigate choice are now actively driving every aspect of our lives.

But as we increasingly rely on them for everything from how we seek out news to how we relate to the people around us, have we automated the way we behave? Is human thinking beginning to mimic algorithmic processes? And is the Cambridge Analytica debacle a warning sign of what’s to come–and of what happens when algorithms hack into our collective thoughts?

It wasn’t supposed to go this way. Overwhelmed by choice–in products, people, and the sheer abundance of information coming at us at all times–we’ve programmed a better, faster, easier way to navigate the world around us. Using clear parameters and a set of simple rules, algorithms help us make sense of complex issues. They’re our digital companions, solving real-world problems we encounter at every step, and optimizing the way we make decisions. What’s the best restaurant in my neighborhood? Google knows it. How do I get to my destination? Apple Maps to the rescue. What’s the latest Trump scandal making the headlines? Facebook may or may not tell you.

Wouldn’t it be nice if code and algorithms knew us so well — our likes, our dislikes, our preferences — that they could anticipate our every need and desire? That way, we wouldn’t have to waste any time thinking about it: We could just read the one article that’s best suited to reinforce our opinions, date whoever meets our personalized criteria, and revel in the thrill of familiar surprise. Imagine all the time we’d free up, so we could focus on what truly matters: carefully curating our digital personas and projecting our identities on Instagram.

It was Karl Marx who first said our thoughts are determined by our machinery, an idea that Ellen Ullman references in her 1997 book, Close to the Machine, which predicts many of the challenges we’re grappling with today. Beginning with the invention of the Internet, the algorithms we’ve built to make our lives easier have ended up programming the way we behave.

Photo courtesy of Shutterstock/Lightspring

Here are three algorithmic processes and the ways in which they’ve hacked their way into human thinking, hijacking our behavior.

1. Product Comparison: From Online Shopping to Dating

Amazon’s algorithm allows us to browse and compare products, save them for later, and eventually make our purchase. But what started as a tool designed to improve our e-commerce experience now extends far beyond that. We’ve internalized this algorithm and are applying it to other areas of our lives–like relationships.

Dating today is much like online shopping. Enabled by social platforms and apps, we browse endless options, compare their features, and select the one that taps into our desires and perfectly fits our exact personal preferences. Or we just endlessly save them for later, as we navigate the illusion of choice that permeates both the world of e-commerce and the digital dating universe.

Online, the world becomes an infinite supply of products, and now, people. “The web opens access to an unprecedented range of goods and services from which you can select the one thing that will please you the most,” Ullman explains in Life in Code. “[There is the idea] that from that choice comes happiness. A sea of empty, illusory, misery-inducing choice.”

We all like to think that our needs are completely unique–and there’s a certain sense of seduction and pleasure that we derive from the promise of finding the one thing that will perfectly match our desires.

Whether it’s shopping or dating, we’ve been programmed to constantly search, evaluate and compare. Driven by algorithms, and in a larger sense, by web design and code, we’re always browsing for more options. In Ullman’s words, the web reinforces the idea that “you are special, your needs are unique, and [the algorithm] will help you find the one thing that perfectly meets your unique need and desire.”

In short, the way we go about our lives mimics the way we engage with the Internet. Algorithms are an easy way out, because they allow us to take the messiness of human life, the tangled web of relationships and potential matches, and do one of two things: Apply a clear, algorithmic framework to deal with it, or just let the actual algorithm make the choice for us. We’re forced to adapt to and work around algorithms, rather than use technology on our terms.

Which leads us to another real-life phenomenon that started with a simple digital act: rating products and experiences.

2. Quantifying People: Ratings & Reviews

As with all other well-meaning algorithms, this one is designed with you and only you in mind. Using your feedback, companies can better serve your needs, provide targeted recommendations just for you, and serve you more of what you’ve historically shown you like, so you can carry on mindlessly consuming it.

From your Uber ride to your Postmate delivery to your Handy cleaning appointment, nearly every real-life interaction is rated on a scale of 1-5 and reduced to a digital score.

As a society we’ve never been more concerned with how we’re perceived, how we perform, and how we compare to others’ expectations. We’re suddenly able to quantify something as subjective as our Airbnb host’s design taste or cleanliness. And the sense of urgency with which we do it is incredible — you’re barely out of your Uber car when you neurotically tap all five stars, tipping with wild abandon in a quest to improve your passenger rating. And the rush of being reviewed in return! It just fills you with utmost joy.

Yes, you might be thinking of that dystopian Black Mirror scenario, or that oddly relatable Portlandia sketch, but we’re not too far off from a world where our digital score simultaneously replaces and drives all meaning in our lives.

We’ve automated the way we interact with people, where we’re constantly measuring and optimizing those interactions in an endless cycle of self-improvement. It started with an algorithm, but it’s now second nature.

As Jaron Lanier wrote in his introduction to Close to the Machine, “We create programs using ideas we can feed into them, but then [as] we live through the program. . .we accept the ideas embedded in it as facts of nature.”

That’s because technology makes abstract and often elusive, desirable qualities quantifiable. Through algorithms, trust translates into ratings and reviews, popularity equals likes, and social status means followers. Algorithms create a sort of Baudrillardian simulation, where each rating has completely replaced the reality it refers to, and where the digital review feels more real, and certainly more meaningful, than the actual, real-life experience.

In facing the complexity and chaos of real life, algorithms help us find ways to simplify it; to take the awkwardness out of social interaction and the insecurity that comes with opinions and real-life feedback, and make it all fit neatly into a ratings box.

But as we adopt programming language, code, and algorithms as part of our own thinking, are human nature and artificial intelligence merging into one? We’re used to thinking of AI as an external force, something we have little control over. What if the most immediate threat of AI is less about robots taking over the world, and more about technology becoming more embedded into our consciousness and subjectivity?

In the same way that smartphones became extensions of our senses and our bodies, as Marshall McLuhan might say, algorithms are essentially becoming extensions of our thoughts. But what do we do when they replace the very qualities that make us human?

And, as Lanier asks, “As computers mediate human language more and more over time, will language itself start to change?”

Image: antoniokhr/iStock

3. Automating Language: Keywords and Buzzwords

Google indexes search results based on keywords, and SEO tactics push websites to the top of those results. To get there, we work around the algorithm, figure out what makes it tick, and sprinkle websites with keywords that make them more likely to stand out in Google’s eyes.

But much like Google’s algorithm, our mind prioritizes information based on keywords, repetition, and quick cues.
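As a toy illustration of the keyword-driven ranking the essay describes (a drastic simplification of what any real search engine does; the pages, queries, and scoring below are invented for illustration):

```python
# A toy keyword ranker: score each page by how often the query terms appear.
# Pages that "sprinkle" more of the right keywords float to the top --
# which is exactly the incentive the essay describes.

pages = {
    "page_a": "blockchain crypto AI blockchain token",
    "page_b": "gardening tips for spring gardening",
    "page_c": "AI and blockchain in gardening",
}

def score(page_text: str, query_terms: list[str]) -> int:
    """Count how many times the query terms occur in the page."""
    words = page_text.lower().split()
    return sum(words.count(term) for term in query_terms)

def rank(query: str) -> list[str]:
    """Order pages by descending keyword score."""
    terms = query.lower().split()
    return sorted(pages, key=lambda p: score(pages[p], terms), reverse=True)

print(rank("blockchain AI"))  # ['page_a', 'page_c', 'page_b']
```

The keyword-stuffed `page_a` wins even though `page_c` may be more relevant to a human reader, which is the distortion the essay is pointing at.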

It started as a strategy we built around technology, but it now seeps into everything we do–from the way we write headlines to how we generate “engagement” with our tweets to how we express ourselves in business and everyday life.

Take the buzzword mania that dominates both the media landscape and the startup scene. A quick look at some of the top startups out there will show that the best way to capture people’s attention–and investors’ money–is to add “AI,” “crypto” or “blockchain” into your company manifesto.

Companies are being valued based on what they’re signifying to the world through keywords. The buzzier the keywords in the pitch deck, the higher the chances a distracted investor will throw some money at it. Similarly, a headline that contains buzzwords is far more likely to be clicked on, so the buzzwords start outweighing the actual content. Clickbait is one symptom of that.

Where do we go from here?

Technology gives us clear patterns; online shopping offers simple ways to navigate an abundance of choice. Therefore there’s no need to think — we just operate under the assumption that algorithms know best. We don’t exactly understand how they work, and that’s because code is hidden: we can’t see it, the algorithm just magically presents results and solutions. As Ullman warns in Life in Code, “When we allow complexity to be hidden and handled for us, we should at least notice what we are giving up. We risk becoming users of components. . .[as we] work with mechanisms that we do not understand in crucial ways. This not-knowing is fine while everything works as expected. But when something breaks or goes wrong or needs fundamental change, what will we do except stand helpless in the face of our own creations?”

Cue fake news, misinformation, and social media targeting in the age of Trump.

Image courtesy of Intellectual Take Out.

So how do we encourage critical thinking, how do we spark more interest in programming, how do we bring back good old-fashioned debate and disagreement? What can we do to foster difference of opinion, let it thrive, and allow it to challenge our views?

When we operate within the bubble of distraction that technology creates around us, and when our social media feeds consist of people who think just like us, how can we expect social change? What ends up happening is we operate exactly as the algorithm intended us to. The alternative is questioning the status quo, analyzing the facts and arriving at our own conclusions. But no one has time for that. So we become cogs in the Facebook machine, more susceptible to propaganda, blissfully unaware of the algorithm at work–and of all the ways in which it has inserted itself into our thought processes.

As users of algorithms rather than programmers or architects of our own decisions, our own intelligence becomes artificial. It’s “program or be programmed,” as Douglas Rushkoff would say. If we’ve learned anything from Cambridge Analytica and the 2016 U.S. elections, it’s that it is surprisingly easy to reverse-engineer public opinion, to influence outcomes, and to create a world where data, targeting, and bots lead to a false sense of consensus.

What’s even more disturbing is that the algorithms we trust so much–the ones that are deeply embedded in the fabric of our lives, driving our most personal choices–continue to hack into our thought processes, in increasingly bigger and more significant ways. And they will ultimately prevail in shaping the future of our society, unless we reclaim our role as programmers, rather than users of algorithms.


Source: Tech Crunch

Nvidia’s researchers teach a robot to perform simple tasks by observing a human

Industrial robots are typically all about repeating a well-defined task over and over again. Usually, that means performing those tasks a safe distance away from the fragile humans that programmed them. More and more, however, researchers are now thinking about how robots can work in close proximity to humans and even learn from them. In part, that’s what Nvidia’s new robotics lab in Seattle focuses on, and the company’s research team today presented some of its most recent work around teaching robots by observing humans at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Nvidia’s director of robotics research Dieter Fox.

As Dieter Fox, the senior director of robotics research at Nvidia (and a professor at the University of Washington), told me, the team wants to enable this next generation of robots that can safely work in close proximity to humans. But to do that, those robots need to be able to detect people, track their activities and learn how they can help people. That may be in a small-scale industrial setting or in somebody’s home.

While it’s possible to train an algorithm to successfully play a video game by rote repetition and teaching it to learn from its mistakes, Fox argues that the decision space for training robots that way is far too large to do this efficiently. Instead, a team of Nvidia researchers led by Stan Birchfield and Jonathan Tremblay developed a system that allows them to teach a robot to perform new tasks by simply observing a human.

The tasks in this example are pretty straightforward and involve nothing more than stacking a few colored cubes. But it’s also an important step in this overall journey to enable us to quickly teach a robot new tasks.

The researchers first trained a sequence of neural networks to detect objects, infer the relationship between them and then generate a program to repeat the steps it witnessed the human perform. The researchers say this new system allowed them to train their robot to perform this stacking task with a single demonstration in the real world.

One nifty aspect of this system is that it generates a human-readable description of the steps it’s performing. That way, it’s easier for the researchers to figure out what happened when things go wrong.

Nvidia’s Stan Birchfield tells me that the team aimed to make training the robot easy for a non-expert — and few things are easier to do than to demonstrate a basic task like stacking blocks. In the example the team presented in Brisbane, a camera watches the scene and the human simply walks up, picks up the blocks and stacks them. Then the robot repeats the task. Sounds easy enough, but it’s a massively difficult task for a robot.

To train the core models, the team mostly used synthetic data from a simulated environment. As both Birchfield and Fox stressed, it’s these simulations that allow for quickly training robots. Training in the real world would take far longer, after all, and can also be far more dangerous. And for most of these tasks, there is no labeled training data available to begin with.

“We think using simulation is a powerful paradigm going forward to train robots [to] do things that weren’t possible before,” Birchfield noted. Fox echoed this and noted that this need for simulations is one of the reasons why Nvidia thinks that its hardware and software is ideally suited for this kind of research. There is a very strong visual aspect to this training process, after all, and Nvidia’s background in graphics hardware surely helps.

Fox admitted that there’s still a lot of research left to be done here (most of the simulations aren’t photorealistic yet, after all), but that the core foundations for this are now in place.

Going forward, the team plans to expand the range of tasks that the robots can learn and the vocabulary necessary to describe those tasks.


Source: Tech Crunch

Orbital ATK is launching a cargo rocket for the ISS early tomorrow morning

Wake up in the wee hours of the morning on the East Coast, and you might get a chance to watch a rocket launch more than three tons of cargo for the International Space Station. Set for a 4:39 a.m. ET liftoff from NASA’s facility on Wallops Island, Virginia, if the weather holds, this will be Orbital ATK’s ninth cargo delivery to the ISS.

The Antares launch, which is the company’s first since November, was initially scheduled for today, but was ultimately pushed back to allow for inspections and better weather. The ship will carry supplies, parts, gear and a trio of CubeSats (mini-satellites) designed for ISS science studies. CBS notes one particular quantum physics study that “will attempt to cool atoms to a billionth of a degree above absolute zero.”

If you’re already up that early on the East Coast and have a decent vantage point, look up. Things will start off small and build to something potentially spectacular. Space.com describes it as akin to a shooting star at first, building to something more like a comet, with the sun catching the rocket’s smoky trail around four and a half minutes after liftoff.

The launch is one of 11 planned under a NASA contract, with potential to add six more supply missions for the ISS. SpaceX, for its part, is currently under contract for 20 such missions. 


Source: Tech Crunch

After tens of thousands of pre-orders, 3D audio headphones startup Ossic disappears

After taking tens of thousands of crowdfunding pre-orders for a high-end pair of “3D sound” headphones, audio startup Ossic announced this weekend that it is shutting down and that backers will not be receiving refunds.

The company raised $2.7 million on Kickstarter and $3.2 million on Indiegogo for its Ossic X headphones, which it pitched as a pair of high-end, head-tracking headphones that would be perfect for listening to 3D audio, especially in a VR environment. While the company also raised a “substantial seed investment,” in a letter on the Ossic website it blamed the slow adoption of virtual reality, along with crowdfunding campaign stretch goals that bogged down its R&D team.

“This was obviously not our desired outcome. The team worked exceptionally hard and created a production-ready product that is a technological and performance breakthrough. To fail at the 5 yard-line is a tragedy. We are extremely sorry that we cannot deliver your product and want you to know that the team has done everything possible including investing our own savings and working without salary to exhaust all possibilities.”

We have reached out to the company for additional details.

Through January 2017, the San Diego company had received more than 22,000 pre-orders for its Ossic X headphones. This past January, Ossic announced that it had shipped the first units to the 80 backers in its $999 developer tier. In that same update, the company said it would enter “mass production” by late spring 2018.

In the end, after tens of thousands of pre-orders, Ossic only built 250 pairs of headphones and only shipped a few dozen to Kickstarter backers.

Crowdfunding campaign failures for hardware products are rarely shocking, but the collapse usually comes from a company failing to secure additional funding from outside investors. Here, Ossic appears to have been misguided from the start: even with nearly $6 million in crowdfunding, plus seed funding the company said nearly matched that figure, it was left unable to begin large-scale manufacturing. The company said in its letter that it would likely take more than $2 million in additional funding to deliver the existing backlog of pre-orders.

Backers are understandably quite upset about not receiving their headphones. A group of more than 1,200 Facebook users has joined a recently created page threatening a class action lawsuit against the team.


Source: Tech Crunch

AT&T launches its LTE-powered Amazon Dash-style button

When we first told you about AT&T’s LTE-M Button, the information was socked away in a deluge of AWS Re:Invent announcements. The telecom giant was a bit more upfront when announcing its availability earlier this week — but just a bit.

After all, it’s not a direct-to-consumer device. Unlike the product-branded hunk of plastic you can presently pick up from Amazon to refresh your supply of Goldfish crackers and Tide Pods, this one’s currently open to developers at companies looking to build their own. What it does have going for it, however, is LTE-M, a cheaper, lower-power version of 4G that’s set to power a future generation of IoT devices.

That means it can be used for your standard Dash-like activities — letting customers replenish items with a press — and it can also be implemented in some more interesting scenarios, out of the bounds of regular WiFi. AT&T offers up a couple of use cases, including customer feedback in public venues and use in places like construction sites where home/office WiFi isn’t an option.

Of course, without the direct retail feedback loop, it’s not really a Dash competitor — and besides, AWS is helping power the thing, so Amazon’s still getting a kickback here. Oh, and then there’s the price — the buttons start at $30 apiece, which amounts to a lot of Tide Pods. As such, we likely won’t see them take off too quickly, but they do provide an interesting use case as AT&T looks to LTE-M to push IoT outside of the home.


Source: Tech Crunch

With at least $1.3 billion invested globally in 2018, VC funding for blockchain blows past 2017 totals

Although bitcoin and blockchain technology may not take up quite as much mental bandwidth for the general public as it did just a few months ago, companies in the space continue to rake in capital from investors.

One of the latest to do so is Circle, which recently announced a $110 million Series E round led by bitcoin mining hardware manufacturer Bitmain. Other participating investors include Tusk Ventures, Pantera Capital, IDG Capital Partners, General Catalyst, Accel Partners, Digital Currency Group, Blockchain Capital and Breyer Capital.

This round vaults Circle into an exclusive club of crypto companies that are valued, in U.S. dollars, at $1 billion or more in their most recent venture capital round. According to Crunchbase data, Circle was valued at $2.9 billion pre-money, up from a $420 million pre-money valuation in its Series D round, which closed in May 2016. That same data shows that only Coinbase and Robinhood — a mobile-first stock-trading platform that recently made a big push into cryptocurrency trading — were previously in the crypto-unicorn club, which Circle has now joined.

But that’s not the only milestone for the world of venture-backed cryptocurrency and blockchain startups.

Back in February, Crunchbase News predicted that the amount of money raised in old-school venture capital rounds by blockchain and blockchain-adjacent startups in 2018 would surpass the amount raised in 2017. Well, it’s only May, and it looks like the prediction panned out.

In the chart below, you’ll find worldwide venture deal and dollar volume for blockchain and blockchain-adjacent companies. We purposely excluded ICOs, including those that had traditional VCs participate, and instead focused on venture deals: angel, seed, convertible notes, Series A, Series B and so on. The data displayed below is based on reported data in Crunchbase, which may be subject to reporting delays, and is, in some cases, incomplete.

A little more than five months into 2018, reported dollar volume invested in VC rounds raised by blockchain companies surpassed 2017’s totals. Not just that, the nearly $1.3 billion in global dollar volume is greater than the reported funding totals for the 18 months between July 1, 2016 and New Year’s Eve in 2017.

And although Circle’s Series E round certainly helped to bump up funding totals year-to-date, there were many other large funding rounds throughout 2018.

There were, of course, many other large rounds over the past five months. After all, we had to get to $1.3 billion somehow.

All of this is to say that investor interest in the blockchain space shows no immediate signs of slowing down, even as the prices of bitcoin, ethereum and other cryptocurrencies hover at less than half of their all-time highs. Considering that regulators are still figuring out how to treat most crypto assets, and given the massive price volatility and dubious real-world utility of the technology, it may surprise some that investors at the riskiest end of the risk capital pool invest as much as they do in blockchain.

Notes on methodology

Like in our February analysis, we first created a list of companies in Crunchbase’s bitcoin, ethereum, blockchain, cryptocurrency and virtual currency categories. We added to this list any companies that use those keywords, as well as “digital currency,” “utility token” and “security token,” that weren’t previously included in the above categories. After de-duplicating this list, we merged this set of companies with funding rounds data in Crunchbase.
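The filter, de-duplicate, and merge steps described above can be sketched in plain Python. The company names, categories, and round amounts below are invented, and Crunchbase’s actual schema and round types differ; this is only a sketch of the procedure, not the real pipeline:

```python
# Hypothetical sketch of the methodology: select companies by category or
# keyword, de-duplicate by name, then keep only the venture rounds
# (no ICOs) raised by those companies and sum the dollar volume.

CATEGORIES = {"bitcoin", "ethereum", "blockchain", "cryptocurrency", "virtual currency"}
KEYWORDS = CATEGORIES | {"digital currency", "utility token", "security token"}
VENTURE_TYPES = {"angel", "seed", "convertible_note", "series_a", "series_b", "series_e"}

# Invented example records standing in for Crunchbase data.
companies = [
    {"name": "Circle",   "categories": {"blockchain"},     "description": "crypto finance"},
    {"name": "ChainCo",  "categories": set(),              "description": "a security token platform"},
    {"name": "Circle",   "categories": {"cryptocurrency"}, "description": "crypto finance"},  # duplicate entry
    {"name": "PetStore", "categories": set(),              "description": "pet supplies"},
]
rounds = [
    {"company": "Circle",  "type": "series_e", "amount_usd": 110_000_000},
    {"company": "ChainCo", "type": "seed",     "amount_usd": 5_000_000},
    {"company": "Circle",  "type": "ico",      "amount_usd": 50_000_000},  # ICOs are excluded
]

def matches(company: dict) -> bool:
    """A company qualifies via category membership or a keyword in its description."""
    if company["categories"] & CATEGORIES:
        return True
    return any(kw in company["description"] for kw in KEYWORDS)

# Filter, then de-duplicate by name via a set.
crypto_companies = {c["name"] for c in companies if matches(c)}

# "Merge" with rounds: keep venture rounds raised by matched companies only.
venture_rounds = [r for r in rounds
                  if r["company"] in crypto_companies and r["type"] in VENTURE_TYPES]

total = sum(r["amount_usd"] for r in venture_rounds)
print(total)  # 115000000 -- the ICO round is excluded from the tally
```

Note that de-duplicating by name alone, as this sketch does, would conflate distinct companies that share a name; a real pipeline would key on a unique identifier.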

Please note that for some entries in Crunchbase’s round data, the amount of capital raised isn’t known. And, as previously noted, Crunchbase’s data is subject to reporting delays, especially for seed-stage companies. Accordingly, actual funding totals are likely higher than reported here.


Source: Tech Crunch

Bail reform has a complex relationship with tech

On any given day in the United States, more than 450,000 people are behind bars awaiting their constitutionally mandated fair trial. None of them have been convicted of a crime — they’ve been accused of committing a crime, but no formal ruling of guilt or innocence has been made. That means these hundreds of thousands of people are incarcerated simply because they don’t have the financial means to post bail. 

Bail was originally designed to incentivize people to show up for their court dates, but it has since evolved into a system that separates the financially well-off from the poor. It requires arrested individuals to pay money in order to get out of jail while they await trial. For those who can’t afford bail, they wind up having to sit in jail, which means they may be at risk of missing rent payments, losing their jobs and failing to meet other responsibilities. 

Money bail is all too often the condition for securing release from jail while a case is in progress. Cash bail systems leave many people incarcerated even though they haven’t been convicted of a crime.

The cash bail system in the United States is one of the greatest injustices in the criminal justice system, ACLU Deputy National Political Director Udi Ofer tells TechCrunch. Bail reform, Ofer says, is a “key way to achieve” the goals of challenging racial disparities in the criminal justice system and ending mass incarceration. 

As we explored in “The other pipeline,” the criminal justice system in the United States is deeply rooted in racism and a history of oppression. Black and Latino people comprise about 1.5 million of the total 2.2 million people incarcerated in the U.S. adult correctional system, or 67 percent of the prison population, while making up just 37 percent of the total U.S. population, according to the Sentencing Project.

With a criminal justice system that disproportionately affects people of color, it’s no wonder the cash bail system does the same. For one, people of color are 25 percent more likely than white people to be denied the option of bail, according to a pre-trial study by Dr. Traci Schlesinger. And for black men who are given the option to pay bail, the amount is 35 percent higher on average than bail for white men, according to a 2010 study.

The national felony bail median is $10,000. Those who can’t afford it have to rely on bail bond agencies, which charge a non-refundable fee to pay the required bail amount on the person’s behalf. The bail bond companies, which are backed by insurance companies, collect between $1.4 billion and $2.4 billion a year, according to the ACLU and Color of Change.

And when bail bond companies are also out of reach, those sitting in jail awaiting trial are more likely to be convicted of the crime they were charged with. The non-felony conviction rate rose from 50 percent to 92 percent for those jailed pre-trial, according to a study by the New York City Criminal Justice Agency. Along the way, leading up to trial, some prosecutors incentivize people to plead guilty to the charges even if they’re innocent.

“It’s time to end our nation’s system of cash bail that lets the size of your wallet determine whether you are granted freedom or stay locked up in jail,” Ofer says. “Money should never decide a person’s freedom yet that’s exactly what happens every day in the United States.”

Pre-trial detention is also costly to local cities, counties and taxpayers. It costs about $38 million a day to keep these largely nonviolent people behind bars, according to the Pretrial Justice Institute. Annually, that comes out to about $14 billion to jail unconvicted people.

“The only people benefiting from bail is the for-profit bail industry,” Ofer said. “If we’re ever going to end mass incarceration in the United States, then we need to end cash bail.”

Bail reform is coming

Across the nation, bail reform has made its way into a handful of states. New Jersey’s bail reform law took effect last January; since then, its daily jail population has dropped 17.2 percent, and courts have imposed cash bail on just 33 defendants out of 33,400, according to the ACLU.

The ACLU itself is working on bail reform in 38 states, including California, where Ofer says he is optimistic reform will happen this year. Right now, a pre-trial release bill, Senate Bill 10, is up for consideration in the Assembly. The bill argues California should ensure people awaiting trial are not incarcerated simply because they can’t afford to pay bail. The bill also advocates for counties to establish pre-trial services agencies to better determine if people are fit to be released.

The bill, introduced by Senator Bob Hertzberg along with others, is backed by the ACLU and Essie Justice Group, an Oakland-based organization that advocates for actual justice in the criminal justice system.

“Today we have a system that allows for people to be released pre-trial if they have enough money to afford their bail,” Essie Justice Group founder Gina Clayton tells TechCrunch. “Everyone else is required to sit inside of a cage without any way out.”

Essie Justice Group works mostly with and for women who have incarcerated loved ones. Often, the only way out for people is help from family or a plea deal, Clayton says.

“When we see people making the bail, we see that women are going into tremendous debt and are also beholden to an industry that has time and time again been cited and known to practice in quite an incredibly despicable way in terms of coercing and harassing their customers,” Clayton says. “When we think about who are the people who know about what’s going on with bail, it’s black and brown women in this country.”

For the past two years, Essie Justice Group has held an action around Mother’s Day, with the goal of bailing moms out of jail or immigration detention. Last year’s action led to the release of 30 women.

Photo via Essie Justice Group

Can tech help?

The short answer is maybe. Earlier this month, Google banned ads for bail bond services, which Clayton says is the largest step any corporation has taken on behalf of people who have loved ones in jail. But while tech can help in some ways, Clayton has concerns about additional for-profit entities entering the criminal justice system.

“There are definitely tech solutions that I’m very against,” Clayton said, but declined to comment on which ones in particular. “I will say that my energy around this doesn’t come from an imagined place. I’m seeing it happen. One of the things we’re seeing is companies who are interested in bail reform because they see another opportunity to make money off of families. Like, ‘let this person out, but have them, at a cost, check in with people I hire to do this fancy but expensive drug testing three times a week, pay for an ankle shackle or bracelet and GPS monitoring.’ I think the companies that are making money off of those types of things are the ones we need to be wary of.”

There is, however, one for-profit company that immediately jumped to Clayton’s mind as being one doing actual good in the criminal justice space. That company is Uptrust, which provides text message reminders to people regarding court dates.

“I think that is a really great addition to the landscape,” Clayton says. “The reason I’m a proponent of theirs is because I understand their politics and I know what they won’t do, which is take it a step further or get involved with getting incentivized to add on bells and whistles that look less like freedom for people but more revenue for them.”

Uptrust, founded by Jacob Sills and Elijah Gwynm, aims to help people make their court dates. While the movies like to depict flight risks and people skipping town ahead of their court dates, failure to appear in court often comes down to a lack of transportation, work conflicts, not receiving a reminder, childcare or poor time management, Sills tells TechCrunch.

That’s where the idea came to humanize the system a bit more, by enabling public defenders to more easily connect with their clients. Uptrust is two-way in nature and reminds people on behalf of the public defender about court dates. Clients can also communicate any issues they may have about making it to court.

“If the public defender knows the client has an issue, they can usually get court moved,” Sills says. “But if they don’t have the information, they’re not going to lie on behalf of clients.”

Because public defenders don’t make much money, Uptrust doesn’t charge very much, Sills says.

“But they really care about the client and one of the things we saw with this was we needed to change the whole front end of the system to be less adversarial and more human,” Sills says.

In addition to text reminders, Uptrust enables public defenders to assist with other needs clients may have.

“A lot of stuff around bail reform is around risk assessment rather than need assessment,” Sills tells me. “But we saw a lot of these individuals have needs, like help with rides, child care or reminders.”

Public defenders who are invested in the care of their clients can remind them via Uptrust to do things like ask for time off work or schedule child care.

For the end-user, the client, Uptrust is all text-based. For the public defenders, Uptrust offers a software solution that integrates into their case management systems.
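
The article doesn’t include Uptrust’s actual code or API, but the two-way flow it describes, reminders going out to clients and conflicts coming back to the public defender, can be sketched roughly as follows. All names and the two-day reminder window here are hypothetical illustrations, not Uptrust’s implementation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CourtDate:
    client_phone: str
    hearing: date
    issues: list[str] = field(default_factory=list)  # problems the client texts back

def reminders_due(court_dates: list[CourtDate], today: date) -> list[str]:
    """Draft reminder texts for hearings coming up within the next two days."""
    messages = []
    for cd in court_dates:
        days_out = (cd.hearing - today).days
        if 0 <= days_out <= 2:
            messages.append(
                f"Reminder: your court date is in {days_out} day(s). "
                "Reply here with any ride, work or childcare conflicts."
            )
    return messages

def record_reply(cd: CourtDate, text: str) -> None:
    """The two-way part: log a client's reply so the public defender can ask
    the court to move the date instead of letting it become a missed hearing."""
    cd.issues.append(text)
```

The point of the reply log is the needs-assessment framing Sills describes: a recorded conflict gives the defender something concrete to bring to the court.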

Since Uptrust launched in California’s Contra Costa County in the summer of 2016, the county’s court appearance rate has improved from 80 percent to 95 percent, Sills says. To date, Uptrust has supported 20,000 people with a five percent failure-to-appear (FTA) rate.

“As we improve product, if we can get [the FTA rate] down to 3 percent, you really can start taking that data and pushing forth major policy change,” Sills says.

Uptrust’s goal is to shift from risk assessment to needs assessment and ensure people are supported throughout their interactions with the criminal justice system.

“Our view is in terms of bail reform, we need to make sure there’s not a proliferation of things like ankle monitors and whatnot,” Sills says. “For us, success is really being a subcontractor to the community as well as working with the government. I think there’s a huge risk in bail reform as it relates to technology because people see it as a big business opportunity. If a company replaces the government, they may not have the community’s best interest in mind. So it’s important to keep in mind they have the community’s best interest in mind.”

Similar to Uptrust, a tech organization called Appolition works by operating within the confines of the system. Appolition, founded by Dr. Kortney Ryan Ziegler, enables people to funnel their spare change into the National Bail Out fund. As of April, Appolition has directed more than $130,000 toward bail relief. Ziegler was not available for comment for this story.
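
Appolition hasn’t published its internal mechanics, but spare-change apps of this kind typically round each purchase up to the nearest dollar and donate the difference. A minimal sketch of that round-up model, with hypothetical function names, working in cents to avoid floating-point errors:

```python
def round_up_donation(purchase_cents: int) -> int:
    """Spare change on one purchase: the gap up to the next whole dollar."""
    remainder = purchase_cents % 100
    return 0 if remainder == 0 else 100 - remainder

def total_donation(purchases_cents: list[int]) -> int:
    """Total round-ups across a list of purchases, in cents."""
    return sum(round_up_donation(p) for p in purchases_cents)

# A few purchases: $3.75, $12.10 and $6.00
print(total_donation([375, 1210, 600]))  # prints 115, i.e. $1.15 to the fund
```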

Promise, on the other hand, aims to provide an alternative to the cash bail system. In March, Promise raised a $3 million round led by First Round Capital, with participation from Jay-Z’s Roc Nation.

The idea is to offer counties and local governments an alternative approach to holding people behind bars simply because they can’t afford bail. With Promise, case managers can monitor compliance with court orders and better keep tabs on people via the app. GPS monitoring is also an option, albeit a controversial one.

Let’s say you get arrested and end up having a bail hearing. Instead of asking you to pay bail, the public defender could suggest a pre-trial release with Promise. From there, Promise would work with the public defender and your case manager to determine your care plan.

“It’s clear that our values are about keeping people out of jail,” Promise CEO Phaedra Ellis-Lamkins told me on an episode of CTRL+T. “Like, we’re running a company but we fundamentally believe that not just it’s more cost effective but that it’s the right thing to do.”

Instead of a county jail paying $190 per day per person, Ellis-Lamkins said, Promise charges some counties just $17 per person per day. In some cases, Promise charges even less per person.

It’s that for-profit model that worries Clayton.

“Whenever you bring in the for-profit ethos in a criminal justice space, I think we need to be careful,” Clayton says.

She didn’t explicitly call out any companies. In fact, she said she doesn’t feel ready to make a judgment on Promise just yet. But she has a general concern about tech solutions that “dazzle and distract system actors who we really need to hold accountable and see operate in more systemic, holistic ways.”

Solutions, Clayton says, look like social safety nets, such as hospitals and clinics, instead of jails.

“If we want to really move ourselves away from this path we’ve been on,” Clayton says, “which is towards normalizing state control of people then we should be really careful that our system that once looked like slavery to Jim Crow to mass incarceration doesn’t then become tech surveillance of all people.”


Source: Tech Crunch

A simple solution to end the encryption debate


Criminals and terrorists, like millions of others, rely on smartphone encryption to protect the information on their mobile devices. But unlike the data on most of our phones, the data on theirs could endanger lives and pose a great threat to national security.

The challenge for law enforcement, and for us as a society, is how to reconcile the advantages of gaining access to the plans of dangerous individuals with the cost of opening a door into the lives of everyone else. It is the modern manifestation of the age-old conflict between privacy and security, playing out in our pockets and palms.

One-size-fits-all technological solutions, like a manufacturer-built universal backdoor tool for smartphones, likely create more dangers than they prevent. While no solution will be perfect, the best ways to square data access with security concerns require a more nuanced approach that relies on non-technological procedures.

The FBI has increasingly pressed the case that criminals and terrorists use smartphone security measures to avoid detection and investigation, arguing for a technological, cryptographic solution to stop these bad actors from “going dark.” In fact, there are recent reports that the Executive Branch is engaged in discussions to compel manufacturers to build technological tools so law enforcement can read otherwise-encrypted data on smartphones.

But the FBI is also tasked with protecting our nation against cyber threats. Encryption has a critical role in protecting our digital systems against compromises by hackers and thieves. And of course, a centralized data access tool would be a prime target for hackers and criminals. As recent events prove – from the 2016 elections to the recent ransomware attack against government computers in Atlanta – the problem will likely only become worse. Anything that weakens our cyber defenses will only make it more challenging for authorities to balance these “dual mandates” of cybersecurity and law enforcement access.

There is also the problem of internal threats: when they have access to customer data, service providers themselves can misuse or sell it without permission. Once someone’s data is out of their control, they have very limited means to protect it against exploitation. The current, growing scandal around the data harvesting practices on social networking platforms illustrates this risk. Indeed, our company Symphony Communications, a strongly encrypted messaging platform, was formed in the wake of a data misuse scandal by a service provider in the financial services sector.

(Photo by Chip Somodevilla/Getty Images)

So how do we help law enforcement without making data privacy even thornier than it already is? A potential solution is through a non-technological method, sensitive to the needs of all parties involved, that can sometimes solve the tension between government access and data protection while preventing abuse by service providers.

Agreements between some of our clients and the New York State Department of Financial Services (“NYSDFS”) proved popular enough that FBI Director Wray recently pointed to them as a model of “responsible encryption” that solves the problem of “going dark” without compromising the robust encryption critical to our nation’s business infrastructure.

The solution requires storage of encryption keys — the codes needed to decrypt data — with third-party custodians. The service provider itself would not keep these clients’ encryption keys. Rather, the custodians give the access tool to clients, and clients can choose how to use it and to whom they wish to give access. A core component of strong digital security is that a service provider should have neither access to a client’s unencrypted data nor control over a client’s encryption keys.

The distinction is crucial. This solution is not technological, like backdoor access built by manufacturers or service providers, but a human solution built around customer control.  Such arrangements provide robust protection from criminals hacking the service, but they also prevent customer data harvesting by service providers.

Where clients choose their own custodians, they may subject those custodians to their own, rigorous security requirements. The clients can even split their encryption keys into multiple pieces distributed over different third parties, so that no one custodian can access a client’s data without the cooperation of the others.
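
The article doesn’t specify how such a split works in Symphony’s product, but an all-shares-required split of the kind described, where no single custodian can recover the key without the others, can be illustrated with a simple XOR scheme. This is a toy sketch, not Symphony’s implementation:

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split a key into n shares; all n are required to rebuild it."""
    # n - 1 shares are pure randomness...
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    # ...and the last share is the key XORed with all of them, so any
    # subset of fewer than n shares is statistically independent of the key.
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def combine_shares(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original key."""
    key = bytes(len(shares[0]))
    for share in shares:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key

# A client generates a 32-byte data key and spreads it over three custodians.
data_key = secrets.token_bytes(32)
custodian_shares = split_key(data_key, 3)
assert combine_shares(custodian_shares) == data_key      # all three cooperate
assert combine_shares(custodian_shares[:2]) != data_key  # two alone learn nothing
```

Threshold schemes such as Shamir’s secret sharing generalize this so that any k of n custodians suffice, at the cost of more math than a simple XOR.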

This solution protects against hacking and espionage while safeguarding against the misuse of customer content by the service provider. But it is not a model that supports service provider or manufacturer built back doors; our approach keeps the encryption key control in clients’ hands, not ours or the government’s.

A custodial mechanism that utilizes customer-selected third parties is not the answer to every part of the cybersecurity and privacy dilemma. Indeed, it is hard to imagine that this dilemma will submit to a single solution, especially a purely technological one. Our experience shows that reasonable, effective solutions can exist. Technological features are core to such solutions, but just as critical are non-technological considerations. Advancing purely technical answers – no matter how inventive – without working through the checks, balances and risks of implementation would be a mistake.


Source: Tech Crunch