Microsoft and a16z put aside their differences and join forces to speak out against AI regulation

Two of the most important forces in two deeply intertwined technology ecosystems – large established companies and startups – have taken a break from counting their money to jointly plead that the government cease and desist from even considering regulations that might affect their financial interests or, as they like to call it, innovation.

“Our two companies may not agree on everything, but this is not about our differences,” writes this group of very disparate perspectives and interests: a16z founding partners Marc Andreessen and Ben Horowitz, and Microsoft CEO Satya Nadella and President and Chief Legal Officer Brad Smith. A truly intersectional set, representing both big business and big money.

But it’s the little guys they’re supposedly looking out for – that is, all the companies that would have been affected by the latest attempt at regulatory overreach: SB 1047.

Imagine being charged for improperly disclosing an open model! Anjney Midha, a general partner at a16z, called it a “regressive tax” on startups and “blatant regulatory capture” by the big tech companies that, unlike Midha and his impoverished colleagues, could afford the lawyers needed to comply.

Except that was all disinformation promulgated by Andreessen Horowitz and the other moneyed interests that might actually have been affected as backers of multibillion-dollar companies. In fact, small models and startups would have been only trivially affected, because the proposed law specifically protected them.

It is odd that the very kind of deliberate carve-out for “Little Tech” that Horowitz and Andreessen routinely champion was distorted and downplayed by the lobbying campaign they and others waged against SB 1047. (The architect of that bill, California State Senator Scott Wiener, talked about this whole issue recently at Disrupt.)

That bill had its problems, but its opposition vastly exaggerated the cost of compliance and never meaningfully substantiated the claim that it would chill or overburden startups.

It’s part of the established playbook: Big Tech companies, with which Andreessen and Horowitz are closely aligned despite their posturing, fight at the state level, where they can win (as with SB 1047), while calling for federal solutions they know will never arrive, or that will have no teeth due to partisan disputes and the ineptitude of Congress on technical issues.

This recently released joint statement on “policy opportunity” is the latter part of the play: after torpedoing SB 1047, they can say they only did so in order to support a federal policy. Never mind that we’re still waiting for the federal privacy law that tech companies have pushed for a decade while fighting state bills.

And what policies do they support? “A variety of responsible market-based approaches.” In other words: Don’t touch our money, Uncle Sam.

Regulation should have “a science and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology” and should “focus on the risk of bad actors misusing AI,” write the powerful venture capitalists and Microsoft executives. What this means is that we should not have proactive regulation, but rather reactive punishments when criminals use unregulated products for criminal purposes.

This approach worked really well for that whole FTX situation, so I can see why they espouse it.

“Regulation should only be implemented if its benefits outweigh its costs,” they also write. It would take thousands of words to unpack all the ways this idea, expressed in this context, is hilarious. But basically, what they are suggesting is that the fox be appointed to the henhouse planning committee.

Regulators should “allow developers and startups the flexibility to choose which AI models to use wherever they are building solutions and not tilt the playing field to benefit a given platform,” they collectively add. The implication is that there is some kind of plan to require permission to use one model or another. Since that is not the case, this is a straw man.

Here’s an important one that I have to quote in full:

The right to learn: Copyright law is designed to promote the progress of science and useful arts by extending protections to publishers and authors to encourage them to bring new works and knowledge to the public, but not at the expense of the public’s right to learn from these works. Copyright law should not be co-opted to imply that machines should be prevented from using data – the foundation of AI – to learn in the same way as people. Knowledge and unprotected facts, regardless of whether contained in protected subject matter, should remain free and accessible.

To be clear, the explicit claim here is that software, run by multi-billion dollar corporations, has the “right” to access any data because it should be able to learn from it “the same way people do.”

First of all: no. These systems are not like people; they produce output that mimics the human-created data in their training sets. They are complex statistical projection software with a natural-language interface. They have no more “right” to any document or fact than Excel does.

Second, this idea that “facts” (by which they mean “intellectual property”) are the only thing these systems are after, and that some kind of data-hoarding cabal is working to thwart them, is a manufactured narrative we have seen before. Perplexity invoked the “facts belong to everyone” argument in its public response to a lawsuit alleging systematic content theft, and its CEO, Aravind Srinivas, repeated the fallacy to me on the Disrupt stage, as if Perplexity were being sued for knowing trivia like the distance from the Earth to the Moon.

While this is not the place for a full accounting of this particular straw-man argument, let me simply point out that while facts are indeed free agents, the way they are created – say, through original reporting and scientific research – involves real costs. That is why the copyright and patent systems exist: not to prevent intellectual property from being widely shared and used, but to incentivize its creation by ensuring that real value can be assigned to it.

Copyright law is far from perfect and is probably abused as much as it is used. But it is not being “co-opted to imply that machines should be prevented from using data.” It is being applied to ensure that bad actors do not circumvent the value systems we have built around intellectual property.

That is clearly the ask here: let the systems we own, run, and profit from freely use the valuable output of others without compensation. To be fair, that part is “the same way as people,” because it is people who design, direct, and deploy these systems, and those people don’t want to pay for anything they don’t have to, and don’t want regulations changing that.

There are many other recommendations in this small policy document, which no doubt receive greater detail in the versions they have sent directly to legislators and regulators through official lobbying channels.

Some ideas are undoubtedly good, if also a little selfish: “fund digital literacy programs that help people understand how to use artificial intelligence tools to create and access information.” Good! Of course, the authors have invested a lot in those tools. Support “Open Data Commons: accessible data sets that would be managed in the public interest.” Excellent! “Examine your procurement practices to enable more startups to sell technology to the government.” Awesome!

But these more general, positive recommendations are the kind of things you see every year in the industry: investing in public resources and speeding up government processes. These acceptable but inconsequential suggestions are just a vehicle for the more important ones I described above.

Ben Horowitz, Brad Smith, Marc Andreessen, and Satya Nadella want the government to stop regulating this lucrative new development, let industry decide which regulations are worthwhile, and have copyright overridden in a way that acts more or less as a general pardon for the illegal or unethical practices that many suspect enabled the rapid rise of AI. Those are the policies that matter to them, whether or not children become digitally literate.
