How IBM is using A.I. for marketing

CNBC | December 21, 2018 | By Lucy Handley

IBM is hard at work demystifying artificial intelligence (AI) for clients, explaining to them how the technology makes decisions.

Eighty-two percent of C-suite executives it surveyed said they wanted to use AI but were concerned about unconscious bias and the skills needed. IBM offers AI across a range of services and has deployed it internally in areas such as recruitment, where it is used to make sure there is no bias in how job descriptions are written, according to IBM Senior Vice President and Chief Marketing Officer Michelle Peluso.

“Technology can help to make sure there’s not bias in promotions and the like and so (there is) this grounded belief at IBM that inclusion is part of our ‘brand state’,” she told CNBC’s “Marketing Media Money.”

There are several ways marketers can best use AI, Peluso said. The first is in getting to know customers. “It allows us to understand more about our customers. We can analyze tone. We can listen in on chat bots, we can analyze personality and social (media), so we have the ability to develop a richer understanding of our customers,” she said.
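
Peluso’s example of analyzing tone is the kind of task that, in practice, is handed to a sentiment model run over customer messages. The sketch below is only a toy, self-contained stand-in (a hand-written keyword lexicon and a made-up scoring rule), not IBM’s tooling, but it makes the general idea concrete.

import re

# Toy tone scorer for customer messages -- a crude stand-in for the
# commercial sentiment models a marketing team would actually use.
POSITIVE = {"great", "love", "thanks", "helpful", "happy"}
NEGATIVE = {"broken", "angry", "refund", "slow", "frustrated"}

def tone_score(message: str) -> float:
    """Return a rough tone score in [-1, 1] based on keyword hits."""
    words = re.findall(r"[a-z']+", message.lower())
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, 5 * hits / max(len(words), 1)))

print(tone_score("Thanks, the new dashboard is great"))           # positive
print(tone_score("I'm frustrated, checkout is slow and broken"))  # negative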

AI is also being used in how businesses interact with their customers, allowing chat bots to answer customer service queries, for example. The nature of advertising — where traditionally messages are broadcast to people one-way — could also become more of an interaction. “We can say in a digital ad (for example) what’s in your refrigerator … And (it will) give you a great recipe, or (AI can) tell us why you’re interested in a certain car. And we’ll tailor the content live to make sure you’re getting the answer, so it will change (so the advertising is) actually interacting … with customers,” Peluso said.


Not enough people are asking if artificial intelligence should be built in the first place

CNBC | December 14, 2018 | By Julia Powles and Helen Nissenbaum

This story originally ran on Medium on December 7, 2018.

The rise of Apple, Amazon, Alphabet, Microsoft and Facebook as the world’s most valuable companies has been accompanied by two linked narratives about technology. One is about artificial intelligence — the golden promise and hard sell of these companies. A.I. is presented as a potent, pervasive, unstoppable force to solve our biggest problems, even though it’s essentially just about finding patterns in vast quantities of data. The second story is that A.I. has a problem: Bias.

The tales of bias are legion: Online ads that show men higher-paying jobs; delivery services that skip poor neighborhoods; facial recognition systems that fail people of color; recruitment tools that invisibly filter out women. A problematic self-righteousness surrounds these reports: Through quantification, of course we see the world we already inhabit. Yet each time, there is a sense of shock and awe and a detachment from affected communities in the discovery that systems driven by data about our world replicate and amplify racial, gender, and class inequality.

Serious thinkers in academia and business have swarmed to the A.I. bias problem, eager to tweak and improve the data and algorithms that drive artificial intelligence. They’ve latched onto fairness as the objective, obsessing over competing constructs of the term that can be rendered in measurable, mathematical form. If the hunt for a science of computational fairness was restricted to engineers, it would be one thing. But given our contemporary exaltation and deference to technologists, it has limited the entire imagination of ethics, law and the media as well.
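
To make concrete what a “measurable, mathematical” construct of fairness looks like, here is a minimal sketch of one of the simplest such definitions, demographic parity, computed on made-up toy data. It is offered only to illustrate the kind of metric the authors are critiquing, not as a method they endorse.

# One "measurable" fairness construct: demographic parity on toy data.
# Purely illustrative of the metrics under debate, not a real audit.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]                  # model decisions (1 = approve)
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected attribute

def positive_rate(preds, grps, group):
    """Share of a group's members who received a positive decision."""
    decisions = [p for p, g in zip(preds, grps) if g == group]
    return sum(decisions) / len(decisions)

gap = abs(positive_rate(predictions, groups, "a") -
          positive_rate(predictions, groups, "b"))
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap of 0.50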

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

What has been remarkably underappreciated is the key interdependence of the twin stories of A.I. inevitability and A.I. bias. Against the corporate projection of an otherwise sunny horizon of unstoppable A.I. integration, recognizing and acknowledging bias can be seen as a strategic concession — one that subdues the scale of the challenge. Bias, like job losses and safety hazards, becomes part of the grand bargain of innovation.

The reality that bias is primarily a social problem and cannot be fully solved technically becomes a strength, rather than a weakness, for the inevitability narrative. It flips the script. It absorbs and regularizes the classification practices and underlying systems of inequality perpetuated by automation, allowing relative increases in “fairness” to be claimed as victories — even if all that is being done is to slice, dice and redistribute the makeup of those negatively affected by actuarial decision-making.

In short, the preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?

In accepting the existing narratives about A.I., vast zones of contest and imagination are relinquished. What is achieved is resignation — the normalization of massive data capture, a one-way transfer to technology companies, and the application of automated, predictive solutions to each and every societal problem.

Given this broader political and economic context, it should not surprise us that many prominent voices sounding the alarm on bias do so with blessing and support from the likes of Facebook, Microsoft, Alphabet, Amazon and Apple. These convenient critics spotlight important questions, but they also suck attention from longer-term challenges. The endgame is always to “fix” A.I. systems, never to use a different system or no system at all.

Once we recognize the inherently compromised nature of the A.I. bias debate, it reveals opportunities deserving of sustained policy attention. The first has to be the wholesale giveaway of societal data that undergirds A.I. system development. We are well overdue for a radical reappraisal of who controls the vast troves of data currently locked down by technology incumbents. Our governors and communities should act decisively to disincentivize and devalue data hoarding with creative policies, including carefully defined bans, levies, mandated data sharing, and community benefit policies, all backed up by the brass knuckles of the law. Smarter data policies would reenergize competition and innovation, both of which have unquestionably slowed with the concentrated market power of the tech giants. The greatest opportunities will flow to those who act most boldly.

The second great opportunity is to wrestle with fundamental existential questions and to build robust processes for resolving them. Which systems really deserve to be built? Which problems most need to be tackled? Who is best placed to build them? And who decides? We need genuine accountability mechanisms, external to companies and accessible to populations. Any A.I. system that is integrated into people’s lives must be capable of contest, account, and redress to citizens and representatives of the public interest. And there must always be the possibility to stop the use of automated systems with appreciable societal costs, just as there is with every other kind of technology.

Artificial intelligence evokes a mythical, objective omnipotence, but it is backed by real-world forces of money, power, and data. In service of these forces, we are being spun potent stories that drive toward widespread reliance on regressive, surveillance-based classification systems that enlist us all in an unprecedented societal experiment from which it is difficult to return. Now, more than ever, we need a robust, bold, imaginative response.

Julia Powles is a Research Fellow in the Information Law Institute at New York University and a 2018 Poynter Fellow at Yale University.




Google CEO Sundar Pichai testifies before Congress on bias, privacy

CNBC | December 11, 2018 | By Jillian D'Onfro

It’s Sundar Pichai’s turn in the congressional hot seat.

Google’s CEO is testifying before the House Judiciary Committee on Tuesday where lawmakers are grilling him on a wide range of issues, including potential political bias on its platforms, its plans for a censored search app in China and its privacy practices.

This is the first time Pichai has appeared before Congress since Google declined to send him or Alphabet CEO Larry Page to a hearing on foreign election meddling earlier this year. That slight sparked anger among senators who portrayed Google as trying to skirt scrutiny.

The hearing culminates a tough year for big tech companies, as lawmakers and the public have become increasingly skeptical about Silicon Valley’s effects on democracy, misinformation and privacy. Tuesday’s proceedings have tested the soft-spoken executive’s ability to remain cool and confident while defending Google in the face of intense questioning.

In their opening remarks, Representatives Kevin McCarthy (R-Calif.) and Bob Goodlatte (R-Va.) outlined how they hoped the hearing would focus on Google’s bias against conservative content, handling of misinformation and hate speech, data privacy, and plans for a censored search app in China.

In response, Pichai’s prepared remarks emphasized Google’s patriotism and focus on user privacy.

One of the first specific questions about Google’s plans in China came from Rep. Sheila Jackson Lee (D-Texas), who expressed concern that Google would aid in the oppression of Chinese people “looking for a lifeline of freedom and democracy.”

“Right now, we have no plans to launch search in China,” Pichai answered, adding that access to information is “an important human right.”

Pichai has said in the past that Google is “not close” to launching a censored search product in China, though Tuesday’s comments appear to further distance the company from those efforts. The Intercept reported in September that at one point Google employees working on the “Project Dragonfly” effort were told to get it into a “launch-ready state” to roll out upon approval from Beijing officials.

Pichai would not, however, go so far as to commit not to launch “a tool for surveillance and censorship in China,” as he was asked to do by Rep. David Cicilline (D-RI).

“We always think it’s in our duty to explore possibilities to give users access to information,” Pichai said.

A handful of representatives also asked Pichai about how transparent (or not) Google is when it comes to its data collection practices. The company came under fire earlier this year after The Associated Press revealed that contrary to what a user might reasonably assume, pausing “Location History” tracking on a Google account didn’t actually stop the search giant from storing time-stamped location data. Google ended up clarifying the language of its policy.

At the hearing, Pichai said that more than 160 million people had checked their Google privacy settings in the last month, but that Google wanted to make it even easier for “average users” to control their data.

“We always think that there is more to do,” Pichai said. “It’s an ongoing area of effort.”

In response to a later question about the General Data Protection Regulation (GDPR) that came into effect in the European Union earlier this year, Pichai said that there was “some value for companies to have consistent global regulation,” and highlighted how Google published its own framework to guide data privacy legislation earlier this year.

One of the explicit focuses of the hearing was whether or not Google’s search results were biased against conservative points of view. Multiple representatives posed questions on this topic, to which Pichai repeatedly responded that Google’s search algorithms did not favor any particular ideology, but instead surfaced the most relevant results, which could be affected by the time of a user’s search as well as their geography.

One particularly fiery take against that line of questioning came from Rep. Ted Lieu (D-Calif.), who said that the queries on conservative bias “wasted time” given that private, profit-seeking companies like Google are protected by the First Amendment. Even if Google were biased, he said, that would be its right. However, he also used sample Google searches to show that Google would turn up positive search results about Republicans and negative search results about Democrats.

“If you want positive searches, do positive things,” Lieu said. “If you get bad press, don’t blame Google. Consider blaming yourself.”

Lieu has made similar points at past hearings that included Facebook, Twitter, and Alphabet.



Fearful of bias, Google blocks gender-based pronouns from new AI tool

CNBC | November 27, 2018

Google’s technology will not suggest gender-based pronouns because the risk is too high that its “Smart Compose” technology might predict someone’s sex or gender identity incorrectly and offend users, product leaders revealed to Reuters in interviews.

Gmail product manager Paul Lambert said a company research scientist discovered the problem in January when he typed “I am meeting an investor next week,” and Smart Compose suggested a possible follow-up question: “Do you want to meet him?” instead of “her.”

Consumers have become accustomed to embarrassing gaffes from autocorrect on smartphones. But Google refused to take chances at a time when gender issues are reshaping politics and society, and critics are scrutinizing potential biases in artificial intelligence like never before.

“Not all ‘screw ups’ are equal,” Lambert said. Gender is “a big, big thing” to get wrong.

Getting Smart Compose right could be good for business. Demonstrating that Google understands the nuances of AI better than competitors is part of the company’s strategy to build affinity for its brand and attract customers to its AI-powered cloud computing tools, advertising services and hardware.

Gmail has 1.5 billion users, and Lambert said Smart Compose assists on 11 percent of messages worldwide sent from Gmail.com, where the feature first launched.

Smart Compose is an example of what AI developers call natural language generation (NLG), in which computers learn to write sentences by studying patterns and relationships between words in literature, emails and web pages.
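
For readers curious what this pattern-learning looks like at its most basic, the sketch below is a deliberately tiny, hypothetical illustration: it counts word pairs in a few sample sentences, suggests the most frequent next word, and skips gendered pronouns, mirroring the blanket block the article describes. It is not Google’s model or code.

from collections import Counter, defaultdict

# Tiny illustration of next-word suggestion: count word pairs (bigrams) in a
# sample corpus, then propose the most frequent follow-up word. A stand-in
# for the statistical pattern-learning described above, not Google's system.
CORPUS = (
    "i am meeting an investor next week . do you want to meet them ? "
    "i am meeting a client next week . do you want to meet them ?"
).split()

GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

bigrams = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word):
    """Most likely next word, skipping gendered pronouns entirely
    (mirroring the blanket block the article describes)."""
    for candidate, _count in bigrams[prev_word.lower()].most_common():
        if candidate not in GENDERED_PRONOUNS:
            return candidate
    return None

print(suggest("meet"))  # -> "them", never "him" or "her"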



Race doesn't impact how job-seekers negotiate salaries—but it does affect how much money they get

CNBC | November 13, 2018 | By Yoni Blumberg

Conventional wisdom holds that you should negotiate your salary when you apply for a job, since that usually won’t hurt your chances of landing an offer as long as you remain likable, and pocketing a few extra thousand dollars every year can add up as you get older. And when it comes to negotiating there’s no shortage of advice on what works and what doesn’t. You should know what you’re worth, for instance, and be ready to justify why you’re asking for more.

But regardless of how much you prepare, new research suggests that if you’re black, racial bias can lessen the offer you end up receiving.

“Racially-biased job evaluators see black job-seekers as less deserving of higher monetary awards and take issue when the black job seekers ask for more,” Morela Hernandez, an associate professor at the University of Virginia, tells CNBC Make It.

In the paper “Bargaining While Black,” Hernandez and her colleagues suggest this bias may help explain the significant racial wage gap in the U.S. In 2016 the Pew Research Center reported that college-educated black men earn 20 percent less than college-educated white men. That’s the difference between making $25 and $32 per hour.

Meanwhile, college-educated black women earn 8 percent less than college-educated white women. When you don’t account for education, the gap becomes even more significant.

The researchers identified the salary negotiation process as a potential contributor to this trend through a series of experiments. In the first, study participants completed a survey to determine their own racial bias. Then they looked at resumes and headshots to estimate the likelihood that hypothetical job seekers would negotiate their salaries. Participants who demonstrated racial bias, the researchers found, expected black job-seekers would negotiate less than white job seekers.

Then participants were randomly assigned to be either hiring evaluators or job candidates and, in one-on-one scenarios, negotiated for a salary between $82,000 and $90,000. Race had no real effect on how much the candidates negotiated, but some of the participants incorrectly thought it did.

“Racially-biased job evaluators consistently overestimated the number of offers and counteroffers black job seekers made,” says Hernandez. “This underlines how our brains can see something that isn’t in fact there by virtue of the lens we use to interpret the situation.”

Biased evaluators expected black job-seekers to negotiate less than the white job-seekers. Yet once the researchers put it to the test, those evaluators thought the black job-seekers actually negotiated more. As a result, they were “less willing to make concessions.”

In other words, because their expectations were violated, they gave the black candidates less money.



Dollar range-bound as investors await Fed rate decision; yen trades with weak bias

CNBC | November 8, 2018

The dollar traded in a narrow range on Thursday as markets settled after U.S. midterm election results came in as expected, leaving investors free to focus on the Federal Reserve’s policy decision later in the global day.

The central bank’s Federal Open Market Committee (FOMC) is expected to maintain the hawkish language seen in recent policy statements, while keeping interest rates unchanged this time.

The Fed has raised rates three times this year as the U.S. economy boomed and inflation started to pick up, and it has signaled a rate rise in December, with two more hikes by mid-2019.

“The dollar is likely to benefit as we still expect the Fed to maintain its hawkish stance. The U.S. economy needs rising rates as wage pressures are building and there is a risk of an overheating of the economy,” said Sim Moh Siong, currency strategist at Bank of Singapore.

The prospect of further Fed tightening helped the dollar recover against the euro and yen, having lost ground after the mid-term elections resulted in a split Congress, with Democrats winning control of the House of Representatives and Republicans cementing their majority in the Senate.

Expectations that Washington will descend into gridlock have reduced President Donald Trump’s chances of pushing through a fiscal stimulus package.

The dollar index, a gauge of its value versus six major peers, traded at 96.22 on Thursday, gaining 0.23 percent.

The dollar strengthened 0.14 percent versus the yen to trade at 113.66 on Wednesday. The dollar has gained around 1.9 percent over the Japanese currency over the last nine trading sessions due to the diverging monetary policies of the U.S. Fed and the Bank of Japan (BoJ).

While the Fed is on track to raise interest rates, the Bank of Japan will press on with ultra-loose monetary policy because of low growth and inflation.

The widening interest rate differential between U.S. and Japanese bonds has made the dollar a more attractive bet than the yen, which is often a funding currency for carry trades.

The euro traded at $1.1429 on Thursday. The single currency had touched an intra-day high of $1.15 on Wednesday, due to dollar weakness rather than any substantial improvement in the euro zone’s economic fundamentals.

The standoff between the EU and Rome over Italy’s budget deficit and concerns over Europe’s slowing economic growth have handicapped the euro, which has lost 4 percent versus the dollar over the last six months.

Elsewhere in the currency market, the pound traded flat at $1.3124 in early Asian trade after gaining 3.36 percent versus the dollar in the last six trading sessions, as traders bet a Brexit agreement was close.

The New Zealand dollar traded flat at $0.6776, with little reaction to its central bank keeping rates on hold at 1.75 percent on Thursday.

The Australian dollar built on its gains of the previous three trading sessions versus the greenback, trading at $0.7283, up 0.1 percent. The Aussie was cheered by stronger-than-expected trade data out of China, its largest trade partner.



Why Silicon Valley can't shake accusations of anticonservative bias

CNBC | October 17, 2018 | By Dipayan Ghosh and Ben Scott

“If we are going to have the most valuable companies in the history of the world decide how all of our news and information is sorted and delivered to us, we are going to need radical transparency.”

The companies make a clear and obvious counterargument. They are not in the business of making value judgments. It’s simply not in their commercial interests to do so. They don’t want to be the “arbiters of truth”; they don’t want to determine what constitutes nudity or profanity and what does not; and they don’t want to determine whether certain novel forms of extreme content deserve to be taken offline or not. No matter what they decide, someone will accuse them of bias. That is why they are desperate to transfer the responsibility (and legal liabilities) of making these decisions to someone else. They want to act upon the policies set forth by a third party, and they don’t care who that third party might be — whether government or civil society or industry organization — so long as the public thinks that third party is credible and so long as the regulations they set are favorable, meaning the rules favor the industry’s desires to innovate, even if that innovation comes at the expense of some public interest.

In the end this won’t work. Because the tech companies do decide. They are both publishers and technology platforms. Every day, they sort political information and deliver it to billions of people. And we do not know the rationale for those choices. Until we do, this controversy is here to stay, because these companies are the new masters of public information. While we’ve never had a perfect system of news production and distribution (far from it), we have always had a pretty clear understanding of how it came to us, who decided, and why. And now we don’t. And the gatekeepers are now media monopolists the likes of which would turn Citizen Kane green with envy.

The answer to the problem of #stopthebias is to pull back the curtain on the digital media marketplace. If we are going to have the most valuable companies in the history of the world decide how all of our news and information is sorted and delivered to us, we are going to need radical transparency. We are going to need a new digital social contract that guarantees our rights in this market.



White House said to prepare antitrust probe order of tech companies

CNBC | September 22, 2018 | By Kevin Breuninger

The White House is reportedly working on a memorandum for President Donald Trump to sign that would direct government agencies to “thoroughly investigate” big tech companies like Google and Facebook, which have fended off accusations of political bias against conservatives, Bloomberg News reported on Saturday.

A draft of that executive order, seen by Bloomberg, is in its preliminary stages and hasn’t yet been run past other government agencies, a White House official told the publication. It also does not mention any specific companies.

Its current language would direct federal agencies to recommend ways to “protect competition among online platforms and address online platform bias” within a month after being signed, according to the report.

However, the White House distanced itself from Bloomberg’s report in a statement to CNBC. Aides told The Washington Post on Saturday they didn’t know where the memo came from. They also cast doubt on whether it had been vetted through normal policy channels.

“Although the White House is concerned about the conduct of online platforms and their impact on society, this document is not the result of an official White House policymaking process,” deputy White House press secretary Lindsay Walters told CNBC in an emailed statement.

Business Insider also published the full leaked document Saturday. The text instructs the government agencies to “promote competition and ensure that no online platform exercises market power in a way that harms consumers, including through the exercise of bias.”

Republican lawmakers and right-wing groups have long questioned whether social media giants like Twitter, Facebook and Google are guilty of anti-conservative bias and of promoting Democratic or progressive political views.

Trump himself has levied those accusations repeatedly, which reached a crescendo when Twitter was hit by accusations of “shadow banning” right-leaning voices on its platform.

Facebook CEO Mark Zuckerberg, Twitter chief Jack Dorsey and most recently Google’s Sundar Pichai have denied that their platforms are politically biased.

State attorneys general are set to brief U.S. Attorney General Jeff Sessions on Sept. 25 about their existing investigations into social media companies’ practices.

Bloomberg’s full report can be found on its website.



A.I. has a bias problem that needs to be fixed: World Economic Forum

CNBC | September 18, 2018 | By Saheli Roy Choudhury


Artificial intelligence has a bias problem and the way to fix it is by making the tech industry in the West “much more diverse”, according to the head of AI and machine learning at the World Economic Forum.

Just two to three years ago, there were very few people raising ethical questions around the use of AI, Kay Firth-Butterfield told CNBC at the World Economic Forum’s Annual Meeting of the New Champions in Tianjin, China.

But ethical questions have now “come to the fore,” she said. “That’s partly because we have (the General Data Protection Regulation), obviously, in Europe, thinking about privacy, and also because there have been some obvious problems with some of the AI algorithms.”

Theoretically, machines are supposed to be unbiased. But there have been instances in recent years that showed even algorithms can be prejudiced.

A few years ago, Google was criticized after its image recognition algorithm identified African Americans as “gorillas.” Earlier this year, a Wired report said that Google has yet to fix the issue, and simply blocked its image recognition software from recognizing gorillas altogether.

“As we’ve seen more and more of these things crop up, then the ethical debate around artificial intelligence has become much greater,” Firth-Butterfield said. “One of the things that we’re trying to do at the World Economic Forum is really find a way of ensuring that AI grows exponentially, as it is doing for the benefit of humanity, whilst mitigating some of these ethical considerations in privacy, bias, transparency and accountability.”

Experts have said that biases sometimes creep into programs because human bias influenced those algorithms when they were being written.

Firth-Butterfield agreed.



Putin takes another swipe at protectionism, 'sanctions, bans and political bias'

CNBC | September 12, 2018 | By Holly Ellyatt

President Vladimir Putin appeared to take another thinly veiled swipe at U.S. President Donald Trump’s economic policies on Wednesday, a day after Russia and China vowed to stand together to fight protectionism.

“The world and global economy are coming up against new forms of protectionism today with different kinds of barriers which are increasing,” Russian President Vladimir Putin told a plenary session at the Eastern Economic Forum (EEF) in Vladivostok, Russia.

“Basic principles of trade — competition and mutual economic benefit — are depreciated and unfortunately undermined; they’re becoming hostages of ideological and fleeting political situations. In that we see a serious challenge for all of the global economy, especially for the dynamically growing Asia-Pacific and its leadership,” he added.

