AI coding assistants in the embedded domain

Philip Ling
AI is bringing down barriers to coding.

KEY TAKEAWAYS:
  • Embedded software developers are evaluating AI assistants.
  • IDE plug-ins provide easy access to LLMs.
  • Visual Studio Code is emerging as the leader.

Vibe coding may have a bad reputation among serious software developers, but there’s no denying that AI-assisted coding has arrived. For people just looking for a quick and simple way to build a tool for a job, vibe coding is real. For professional software developers, AI assistants are on the brink of changing the way we work forever.

Many developers are already using GitHub Copilot to access large language models (LLMs) for software development. This service is available directly on the GitHub site; if you have a GitHub account and are logged in, you may notice repository pages now include a Copilot icon (top right). Hitting this will open a chat box, allowing you to ask Copilot about the files in the repository.

GitHub Copilot is a cloud service with a common application programming interface (API) and model layer. This means developers can also access the service inside integrated development environments (IDEs) via plug-ins or extensions. There are four ways developers may use GitHub Copilot in an IDE:

  • Ask: Ask the service something about the project you are working on (this can also be a mode of operation on the GitHub site).
  • Edit: Upload or point to files you want Copilot to edit for you.
  • Agent: Let it access your PC’s shell to look at, edit and test your code while you approve each action.
  • Plan: Get help with the high-level architecture of your project.

Adding AI coding assistance to your IDE

The way engineers integrate AI into workflows is developing fast. The big advantage of the plug-in approach is that you gain access to LLMs from providers including Google, OpenAI and Anthropic through a single IDE plug-in.

The plug-in gives developers access to the AI industry’s latest LLMs running as cloud services hosted by GitHub. When your IDE extension makes a call to an LLM, the request is passed to GitHub’s Copilot backend and routed to the AI models you’ve chosen (or paid) to use.

Embedded software engineers have a choice when it comes to IDEs, but the flexibility of Visual Studio Code has created a trend toward a slightly less integrated approach to using these tools. VS Code can be used as a complete IDE, but it is also common to use it as a front-end to other commercial tools, such as compilers offered by IAR or Keil.

VS Code was the first IDE to receive a first-party Copilot extension; GitHub brought the plug-in to developers in June 2021. GitHub has since released first-party extensions for other IDEs.

Eclipse is an open-source framework for building IDEs and has been available since 2001. It was immediately popular with semiconductor vendors looking for a platform they could brand: it reduced the cost and complexity of developing and maintaining proprietary IDEs while still allowing developers to leverage third-party plug-ins.

In late 2023, a company called Genuitec offered early access to its own plug-in, Copilot4Eclipse. It is described as a third-party Copilot plug-in because it was not developed by GitHub, yet it provides the same core features as the first-party GitHub extension. The table below provides an overview of the code-generation features GitHub Copilot offers inside an IDE.

Table 1: A shortlist of features offered by GitHub Copilot

  • Inline completions: Inline suggestions as you type in VS Code; supports multi-line and whole-function completions with keyboard accept/dismiss controls.
  • Natural language to code: Generates code from comments or chat prompts; you can ask for a function or block and insert or refine the proposed implementation.
  • Chat-driven generation: Copilot Chat can create new methods, classes or files based on prompts and project context, then propose edits to apply in the editor.
  • Slash commands / smart actions: Provides commands like /tests, /fix, /explain and /doc to generate tests, fixes, explanations and refactors for selected code.
  • Project scale / agentic behavior: Adds agents and @workspace support to plan multi-step changes, edit multiple files, scaffold components and services, and perform broader project-wide generation.
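To make the natural-language-to-code feature concrete, here is the kind of exchange an embedded engineer might have: write a descriptive comment, then accept a suggested completion. This is an illustrative sketch, not actual Copilot output; the function name adc_average and the rounding behavior are assumptions for the example.

```c
#include <stdint.h>
#include <stddef.h>

/* Prompt comment the engineer types:
 * "Return the average of a buffer of 12-bit ADC samples,
 *  rounded to the nearest integer; return 0 for an empty buffer."
 *
 * A completion the assistant might propose:
 */
uint16_t adc_average(const uint16_t *samples, size_t count)
{
    if (count == 0) {
        return 0; /* guard against division by zero */
    }

    uint32_t sum = 0;
    for (size_t i = 0; i < count; i++) {
        sum += samples[i];
    }

    /* Adding half the divisor before dividing rounds to nearest. */
    return (uint16_t)((sum + count / 2) / count);
}
```

The engineer’s job is then to review the suggestion: check the accumulator width against the maximum buffer size (a 32-bit sum can overflow with enough 12-bit samples) and confirm the empty-buffer behavior matches the system’s requirements. This review step is exactly where judgement and experience come in.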

Accessing GitHub Copilot through an IDE delivers valuable features. The user experience varies for Eclipse-based IDEs, while VS Code provides a common interface.

Are developers favoring VS Code?

Whether you use an Eclipse-based vendor-specific IDE or VS Code with vendor plug-ins, the way you access most AI coding assistants will be based on calls to the same core technology: GitHub Copilot. But the features each plug-in offers will vary. Today, the VS Code plug-in is seen as having the edge over Copilot4Eclipse. One reason is that it offers greater functionality, no doubt a result of GitHub’s direct ownership and its longer time in market.

There is a visible trend among device vendors toward greater support for VS Code, with widespread recognition that it offers better integration of GitHub Copilot’s functionality and features into the IDE. Leaders in the microcontroller space, including NXP, Renesas, Infineon and STMicroelectronics, offer both Eclipse-based IDEs and extensions for VS Code, so developers can choose how they want to work with those companies’ products.

While the strengths of a first-party plug-in may influence device makers’ future tool strategies, there is currently no indication that support for Eclipse will be removed. But when AI assistance is proven to increase productivity and becomes implicit in embedded software development, it will be the engineers who ultimately decide.

Software development has turned a corner

Using AI to assist in developing software is not the same as vibe coding. Judgement and experience are imperative if you want to get the best out of these tools: you need solid domain knowledge both when asking the agent to do something and when assessing the quality of what it delivers.

Generating code is one thing, but correctly interpreting your intentions is probably more difficult. These are two distinct stages, and generative AI is getting better at both.

In our next article we will dive deeper with a first-hand experience of using AI coding assistants.

About the Author

Philip Ling
Philip Ling, Technical Content Manager, Corporate Marketing

Philip Ling is a Technical Content Manager with Avnet. He holds a post-graduate diploma in Advanced ...
