Google is reportedly developing an AI that can write, correct and update its own code: a looming threat to human developers in the future?

The major technology companies seem committed to revolutionizing the software engineering sector. Microsoft took the first step through GitHub with Copilot, its code-suggestion artificial intelligence. Google apparently wants to go even further with a secret project that aims to create code capable of writing, correcting and updating itself. The initiative builds on advances in artificial intelligence, and it revives the debate about the possible disappearance of human developers in the future.

According to sources familiar with Google's internal developments, the initiative was born under the name Pitchfork and has since been renamed AI Developer Assistance. It is part of Google's bets on generative artificial intelligence.

The details of how this tool works remain a mystery. However, the few that have come to light paint an interesting picture of what to expect from the project. Pitchfork, or AI Developer Assistance, is a tool that uses machine learning to teach code to write and rewrite itself. How? By learning the style of existing code in a given programming language and applying that knowledge to write new lines of code.
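
To make the idea concrete, here is a minimal sketch of that general approach, not of Pitchfork itself, whose model and interface are not public. It uses a small, publicly available code-generation model from the Hugging Face hub as a stand-in; the model choice and the prompt are illustrative assumptions, not details from the project.

    from transformers import pipeline  # pip install transformers torch

    # Stand-in for Pitchfork's (non-public) model: a small code model
    # pretrained on Python source, i.e. one that has "learned the style"
    # of the language from existing code.
    generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

    # Give the model a function stub; it writes the new lines of code.
    prompt = "def is_palindrome(s: str) -> bool:\n    "
    completion = generator(prompt, max_new_tokens=40)[0]["generated_text"]
    print(completion)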

The original intention behind the project was to build a platform capable of automatically updating Google's Python code base each time a new version of the language was released, without requiring the intervention, or the hiring, of a large number of engineers.
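
The classical, rule-based version of that task, mechanically rewriting code when an API or language version changes, can be sketched with Python's standard ast module. The function names below are invented for illustration; the point of a system like Pitchfork, presumably, is to learn such rewrites rather than hard-code them.

    import ast  # standard library; ast.unparse requires Python 3.9+

    # Hypothetical migration rule: a new library version renamed
    # fetch_rows() to fetch_all() (both names invented for this example).
    RENAMES = {"fetch_rows": "fetch_all"}

    class RenameCalls(ast.NodeTransformer):
        """Rewrite calls to deprecated functions to their new names."""
        def visit_Call(self, node):
            self.generic_visit(node)  # rewrite nested calls first
            if isinstance(node.func, ast.Name) and node.func.id in RENAMES:
                node.func = ast.copy_location(
                    ast.Name(id=RENAMES[node.func.id], ctx=ast.Load()),
                    node.func,
                )
            return node

    old_source = "rows = fetch_rows(db, limit=10)\n"
    tree = RenameCalls().visit(ast.parse(old_source))
    ast.fix_missing_locations(tree)
    print(ast.unparse(tree))  # prints: rows = fetch_all(db, limit=10)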

However, the program's potential turned out to be much greater than expected. The intention now is to build a general-purpose system able to maintain a quality standard in the code without depending on human intervention for development and update tasks.

Google officials still need to resolve several issues before showing the tool to the public. Beyond the technical aspects that remain to be covered, legal and ethical questions will weigh just as heavily. Indeed, the Californian firm made headlines mid-year when it fired an engineer for claiming that LaMDA, its artificial intelligence model for natural-language conversations, had shown signs of sentience comparable to a human's.

The Pitchfork initiative revives the debate on the future disappearance of developers. When it comes to artificial intelligence, two main schools of thought clash: those who see it as a tool, nothing more, and those stakeholders and observers who believe it is only a matter of time before it becomes a threat to the human race. Contributions to the debate keep accumulating, and some suggest that artificial general intelligence could be upon us within 5 to 10 years.

Machines would then be endowed with common sense. At the stage of artificial general intelligence, they would be capable of causal reasoning, that is, the ability to reason about why things happen. Initiatives like Pitchfork would then be well placed to push human developers aside.

And you?

Do current developments in software engineering give rise to legitimate concerns about the future of human developers in the field?

What do you make of the possibility that research will lead to artificial general intelligence within 5 to 10 years?

How do you see artificial intelligence in 5 to 10 years? As a tool, or as a danger to your job as a developer?

See also:

Is autonomous driving today just a futuristic vision at Tesla Motors? The company has just changed the objectives of its Autopilot

SEC asks Musk to step down as Tesla chairman, demands US$40 million fine for out-of-court settlement

Tesla announces that the new computer for fully autonomous driving of its vehicles is in production and will prove itself this month

Tesla shares fall after its autopilot system is involved in a crash and reports of its vehicle batteries catching fire
