This article was written by Allen Lu, CTO and co-founder at Adaptilab, which accelerates companies’ machine learning talent acquisition with AI-driven sourcing.
In the past few years I’ve had the pleasure of working at two of the largest tech companies in the world, Microsoft and Google. Both companies use machine learning in a large number of their products, and I was fortunate enough to work on projects at both companies involving ML.
Here are some of the insights that I gained from my experiences.
At Google, I worked on improving Search through machine learning. I used TensorFlow to create the machine learning models and then used Pandas to analyze data trends and plot comparisons of interesting features.
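Analyzing data trends with Pandas, as described above, can be sketched in a few lines. The dataset and column names below are illustrative stand-ins, not data from the actual Search project:

```python
import pandas as pd

# Hypothetical click data; the columns and values are invented for illustration.
df = pd.DataFrame({
    "query_length": [2, 5, 3, 8, 4, 6],
    "clicked":      [1, 0, 1, 0, 1, 0],
})

# Compare click rate across short vs. long queries.
df["is_long"] = df["query_length"] > 4
click_rate = df.groupby("is_long")["clicked"].mean()
print(click_rate)
```

From a grouped comparison like this, it is a one-liner to plot the feature against the outcome with `click_rate.plot(kind="bar")`.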
The entire development process, from gathering and processing data to making real-time predictions, was built like a well-oiled machine. Each individual part of the process flowed seamlessly into the next, and I was surprised at how modular and organized the engineering was.
Something else that surprised me was the relative simplicity of the machine learning models I was creating. Going into the project, I expected to be using a ton of advanced knowledge that I had learned from personal research and reading publications.
However, apart from the actual research department at Google, basically every other machine learning project involved using TensorFlow to create a standard neural network architecture.
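A “standard neural network architecture” of the kind most applied teams build looks roughly like this in TensorFlow’s Keras API. The layer sizes, input width, and ten-class output are illustrative assumptions, not details of any Google project:

```python
import tensorflow as tf

# A plain feed-forward classifier: the bread and butter of applied ML teams.
# All sizes here are illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                       # 20 input features
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10 output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training is then a single `model.fit(x, y, epochs=...)` call, which is exactly why a few months of focused practice covers most industry tasks.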
I thought to myself that even if someone didn’t have any ML experience, they could probably do most industry-level machine learning tasks with just a few months of appropriate training.
When I first joined Google, most of my work revolved around classic machine learning models — building pipelines with TensorFlow, managing data with Pandas, and deploying models into production. That was the state of the world back then.
Fast-forward to today, and the landscape has completely transformed. Large Language Models (LLMs) are now at the center of nearly every machine learning effort at Google. The Gemini family of models powers features across products like Search (AI Overviews), Gmail, Docs, and even Android. As an ML engineer, you’re just as likely to be fine-tuning a model for retrieval-augmented generation (RAG) or evaluating its responses for safety as you are to be training a classifier.
And TensorFlow isn’t the only tool in town anymore. PyTorch dominates research and much of the applied ML world, while tools like Vertex AI give developers a unified platform for deploying, serving, and monitoring everything from traditional ML to multimodal LLMs. The tooling is more powerful — and the expectations are higher — than ever before.
At Microsoft, I worked in the specific subdivision of the company focused on CS and AI research, known as Microsoft Research (MSR). MSR is pretty different from other divisions of Microsoft, since the people working at MSR are almost exclusively researchers with PhDs. Therefore, a lot of the work done in MSR is incredibly theoretical, from cryptography to advanced AI algorithms.
I worked on a project focused on speech and natural language processing. Because of this, I got to know some of the other ML researchers and observed what they were working on. As at Google, most of the researchers used frameworks like TensorFlow to model their neural networks and deep learning algorithms.
Even some of the most complicated and innovative ML models could be built relatively easily using these frameworks. In fact, there are thousands of GitHub projects dedicated to replicating the models described in leading research publications.
From my experiences, I realized that machine learning in the tech industry is pretty different from how people perceive it. Most people view it as an impenetrable field, where only the select few with years of research experience can hope to enter.
However, with the field’s rise in popularity and new frameworks that make coding models a piece of cake, there is more opportunity than ever to work on machine learning in industry.
At Microsoft, machine learning isn’t confined to research labs anymore — it’s deeply embedded into every product. The company’s flagship Copilot initiative integrates AI into Word, Excel, PowerPoint, Windows, and even GitHub, transforming how people work. On the infrastructure side, Azure OpenAI Service and Azure AI Studio have become the go-to platforms for deploying both OpenAI models (like GPT-4.1 and GPT-4o) and custom fine-tuned models.
Working on ML here means more than just building models — it’s about designing entire systems around them, from prompt orchestration and vector search to evaluation and continuous improvement. Even roles that were once considered “pure ML” now require knowledge of RAG pipelines, embeddings, vector databases, and safety evaluations. ML at Microsoft today is about building end-to-end AI products — not just models.
Back in the early days, TensorFlow was the de facto standard for deep learning. But over the past few years, PyTorch has taken the lead thanks to its intuitive syntax, dynamic computation graphs, and thriving open-source ecosystem. Most research projects and even many production systems now use PyTorch as their primary framework.
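For comparison, here is the same kind of standard classifier in PyTorch. Because the forward pass is ordinary Python code (the “dynamic computation graph”), you can step through it in a debugger or add control flow freely. All sizes are illustrative:

```python
import torch
from torch import nn

# A minimal PyTorch classifier; layer sizes are illustrative assumptions.
class Classifier(nn.Module):
    def __init__(self, in_dim: int = 20, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Plain Python executes eagerly, which is the "dynamic graph" in action.
        return self.net(x)

model = Classifier()
logits = model(torch.randn(32, 20))   # a batch of 32 fake feature vectors
print(logits.shape)                   # torch.Size([32, 10])
```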
That doesn’t mean TensorFlow is obsolete — it still powers many enterprise-scale pipelines, especially inside Google — but ML engineers today are expected to be fluent in both. On top of that, frameworks like JAX (for accelerated research) and Hugging Face Transformers (for LLM development) have become essential parts of the modern ML toolkit.
In 2019, getting a model into production mostly meant training it, deploying it behind an API, and moving on. In 2026, that’s just the starting point. Production ML now includes:
Experiment tracking and versioning with tools like MLflow or Weights & Biases.
Continuous evaluation — including accuracy, latency, fairness, and (for LLMs) response quality and safety.
Prompt engineering and orchestration for GenAI applications.
RAG pipelines that combine retrieval, embeddings, and LLM inference.
Monitoring and governance to detect model drift and ensure regulatory compliance.
Whether you’re at a tech giant or a startup, this holistic skill set — often called MLOps or LLMOps — is now essential for real-world machine learning.
The machine learning job description has changed. While foundational skills like statistics, linear algebra, and classic ML algorithms still matter, employers now look for a more holistic skill set that bridges data, modeling, deployment, and product thinking. Here’s what’s in demand:
LLMs and prompt design: Knowing how to fine-tune, evaluate, and integrate large models.
Retrieval-Augmented Generation (RAG): Building systems that combine search and generation.
Vector databases: Working with tools like Pinecone, Weaviate, or pgvector.
Safety and evaluation: Designing evaluation pipelines for bias, hallucinations, and factuality.
Cloud platforms: Using services like Vertex AI or Azure AI to deploy and scale applications.
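The retrieval half of a RAG system boils down to embedding documents and a query, then ranking by similarity. Here is a toy, dependency-light sketch in which bag-of-words vectors stand in for a real embedding model; the documents and query are invented for illustration:

```python
import numpy as np

# Illustrative document store; a real system would hold many more, with a
# learned embedding model and a vector database instead of this toy setup.
docs = [
    "vertex ai deploys and monitors models on google cloud",
    "azure ai studio serves openai and custom fine tuned models",
    "pgvector adds vector similarity search to postgresql",
]
vocab = sorted({w for d in docs for w in d.split()})

def embed(text: str) -> np.ndarray:
    # Bag-of-words stand-in for a real embedding model, L2-normalized so the
    # dot product below is cosine similarity.
    words = text.lower().split()
    v = np.array([words.count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

doc_vecs = np.stack([embed(d) for d in docs])
query = "vector similarity search in postgresql"
best = int(np.argmax(doc_vecs @ embed(query)))
print(docs[best])   # retrieves the pgvector document
```

In a production pipeline the retrieved document would then be placed into the LLM’s prompt as context, which is the “augmented generation” half.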
Machine learning in 2026 is about more than models — it’s about building complete, responsible, and impactful AI systems.
ML isn’t as exclusive as it sounds. Due to high demand, anyone with an interest can get started and land a job in less than a year.
You can begin your machine learning journey with our Path, Become a Machine Learning Engineer, which focuses on the concepts you’d actually use in the ML industry, rather than the theory.
The courses progress through the need-to-know concepts step by step.
Happy learning!