The FRAME Dispatch

Numbers, Narratives, and the Plant Floor: Building Trust in Data from the Ground Up

From mislabeled downtime to misunderstood bottlenecks, this issue explores why accurate data starts with systems, language, and habits on the floor.

Many manufacturers say they want to be data-driven. But few understand what it really takes to make data usable and trustworthy.

This issue looks at why so many data projects fall short. We start by exploring the hidden challenges behind collecting and using plant floor data. From legacy equipment to unclear ownership, the obstacles are often more practical than technical.

We then shift to the floor itself. You will learn how to spot bottlenecks, where bad metrics take root, and how label games with planned and unplanned downtime can distort entire improvement efforts.

To close, we share a career guide for those entering the world of controls. With the right focus and tools, you can make meaningful progress and build a foundation that lasts.

The Hidden Cost of Getting Usable Data

Collecting data in a manufacturing setting is far more complex than most people admit.

In a recent set of conversations with potential customers, a clear pattern emerged. They understood the value of data. They were excited about what it could unlock in terms of operational improvements. But they had little awareness of how much effort it takes to get that data to a state where it is actually usable. Not just collected or visualized, but structured and reliable enough to support decisions, power analytics, or drive automation.

That is the focus of this insight. I want to walk through why turning raw machine data into something that generates real return on investment remains so difficult for many manufacturers. I also want to connect this to technologies I keep discussing with clients and partners, including control systems, edge infrastructure, industrial protocols, and cloud platforms.

Most facilities are not starting from a blank slate. Instead, they have machines from multiple vendors, built during different eras, each controlled by different systems. These were often tied together by integrators with limited documentation and later maintained by in-house teams with varying skill sets. In most cases, data is already being generated and visualized through local SCADA systems. But getting beyond the SCADA screen into usable data flows is where the friction begins.

Figure 1 - Numbers, Narratives, and the Plant Floor: Building Trust in Data from the Ground Up | The Real Bottlenecks in Your Data Strategy


Challenge 1: Obsolete Equipment

This continues to be the most common and frustrating roadblock. Many teams will say, "We have been collecting data from these machines for years. Why is it a problem now?" The problem is not whether the data exists. The problem is how difficult the data is to access in a meaningful and scalable way.

First, older equipment often requires costly and time-consuming workarounds. Specialized hardware and rare expertise are needed to extract signals cleanly and safely. Second, many original equipment manufacturers no longer support key components. This leaves internal teams without guidance when issues arise. Third, connecting legacy systems to modern networks creates serious cybersecurity concerns. These risks need to be addressed before any meaningful data movement can occur.

Challenge 2: Reverse Engineering Machines

It is rare to find two identical control systems in the field. Despite all the talk about standardization, most machines are configured differently. Different versions, different applications, and different integrators or end-users lead to dramatically different programming.

For example, one filler may be installed at a small facility in Italy packaging olive oil. Another may be deployed in Florida producing orange juice at scale. Even if both machines were built by the same manufacturer, their setup and integration will differ significantly.

Before any data can be used, each machine and line needs to be reverse engineered. This process takes time and resources. It also often lacks reusability. Incoming teams do not trust prior work and typically start from scratch. As a result, even well-documented systems are re-analyzed, line by line, adding cost and delay to every project.

Challenge 3: Plant Architecture Gaps

Many facilities simply lack the digital foundation needed to support robust data strategies. I regularly meet manufacturers who still operate on unmanaged switches, inconsistent naming conventions, fragmented networks, and outdated server hardware.

Even when edge devices or data loggers are installed, the overall architecture is often an afterthought. There is no clear source of truth, no namespace strategy, and no visibility into the full picture of what is connected where. These gaps make any scalable data project much more difficult and expensive to maintain.
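One way to start closing that gap is to agree on a tag naming convention and enforce it mechanically. The sketch below shows the idea with a hypothetical hierarchy loosely inspired by the ISA-95 equipment model; the segment names and the example tags are illustrative, not a standard your plant must adopt.

```python
import re

# Hypothetical tag-path convention: Enterprise/Site/Area/Line/Device/Signal.
# The pattern and segment names are assumptions for illustration.
TAG_PATH = re.compile(
    r"^(?P<enterprise>\w+)/(?P<site>\w+)/(?P<area>\w+)/"
    r"(?P<line>\w+)/(?P<device>\w+)/(?P<signal>\w+)$"
)

def validate_tag(path: str) -> dict:
    """Return the parsed hierarchy segments, or raise ValueError for a bad path."""
    match = TAG_PATH.match(path)
    if match is None:
        raise ValueError(f"tag path does not follow the convention: {path!r}")
    return match.groupdict()

# A well-formed path parses cleanly into its hierarchy...
parsed = validate_tag("Acme/Plant1/Packaging/Line3/Filler/RunStatus")
print(parsed["line"], parsed["signal"])

# ...while an ad-hoc name from a legacy system fails fast instead of
# silently polluting the namespace.
try:
    validate_tag("FILLER_3_RUN")
except ValueError as err:
    print(err)
```

The point is not the regex. It is that a documented, machine-checkable convention gives every new device a single place to live, which is the beginning of a real source of truth.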

Challenge 4: Protocol Fragmentation

Industrial protocols are not plug-and-play. Many facilities have a mix of EtherNet/IP, Modbus, OPC DA, PROFINET, and proprietary OEM variants. These protocols do not communicate natively with each other, and the effort to translate or normalize data streams is rarely accounted for early in a project. Without protocol awareness at the planning stage, teams often discover late that data loss, mismatched timestamps, or unreliable polling are holding back their entire analytics stack. Bridging these systems requires thoughtful design, not just more hardware.
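The core of that design work is normalization: every source, whatever its protocol, should land in one common record with scaled engineering values and timezone-aware timestamps. The sketch below is a minimal illustration, assuming made-up register numbers, node IDs, and scaling factors; real drivers for these protocols have their own APIs.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    source: str          # which protocol/driver produced the value
    tag: str             # normalized tag name
    value: float         # scaled engineering value
    timestamp: datetime  # always timezone-aware UTC

def from_modbus(register: int, raw: int, scale: float, ts_epoch: float) -> Reading:
    # Modbus delivers unitless register values; scaling and naming live here.
    return Reading("modbus", f"reg_{register}", raw * scale,
                   datetime.fromtimestamp(ts_epoch, tz=timezone.utc))

def from_opc(node_id: str, value: float, ts: datetime) -> Reading:
    # OPC timestamps may arrive naive; force them into UTC before storage
    # so trends from different sources line up.
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)
    return Reading("opc", node_id, value, ts.astimezone(timezone.utc))

r1 = from_modbus(40001, 1234, 0.1, 1_700_000_000.0)
r2 = from_opc("ns=2;s=Line3.Filler.Temp", 85.2, datetime(2023, 11, 14, 22, 13, 20))
print(r1.value, r2.timestamp.tzinfo)
```

Getting agreement on this one record shape, early, is exactly the planning step the paragraph above says most projects skip.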

Challenge 5: Organizational Disconnects

The biggest barrier is often not technical. It is organizational.

Data initiatives are launched by leadership teams who expect insight. But the effort to collect, contextualize, and maintain data accuracy falls on engineering teams who are already stretched thin. These teams are often not consulted in the early stages, leading to gaps in expectations, budgeting, and implementation timelines.

I have seen projects stall for months simply because no one defined ownership for system integration or data modeling. Even worse, data is collected and stored but never used because it was not aligned with the original operational goals.

Conclusion: Usable Data Requires More Than Infrastructure

When people say “just get the data,” they overlook the full cost of getting it right.

Data is not useful just because it is available. It is useful when it is accurate, contextual, trustworthy, and delivered in the right format to the right people at the right time. That takes more than a few sensors or dashboards. It takes architecture. It takes standardization. And it takes collaboration across roles that rarely sit at the same table.

For manufacturers looking to modernize, this is where the real work begins. Not at the cloud. Not on the screen. But on the floor, in the systems, and in the conversations that shape how your data actually flows.

How to Identify and Validate Bottlenecks

One of the most important concepts on the plant floor is the bottleneck.

Bottlenecks are the machines, work cells, or processes that restrict the output of an entire production line. They are the limiting factor, the constraint that defines how fast or how effectively a process can run. If you remove or improve a bottleneck, you unlock immediate throughput gains. If you ignore it, no amount of improvement elsewhere will make a difference.
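The arithmetic behind that claim is worth making explicit: line throughput is the minimum of the station rates, so only improving the constraint moves the number. The rates below are illustrative units-per-hour figures for a hypothetical four-station line.

```python
# Throughput is capped by the slowest station.
rates = {"depalletizer": 120, "filler": 95, "capper": 80, "labeler": 110}

bottleneck = min(rates, key=rates.get)
throughput = rates[bottleneck]
print(bottleneck, throughput)  # capper 80

# Speeding up a non-constraint station changes nothing...
rates["filler"] = 130
print(min(rates.values()))     # still 80

# ...but improving the constraint immediately raises line output.
rates["capper"] = 100
print(min(rates.values()))     # 100
```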

The good news is that most plant managers and engineers already know the theory. The challenge is putting it into practice.

So how do you identify and validate where the bottleneck actually is?

Figure 2 - Numbers, Narratives, and the Plant Floor: Building Trust in Data from the Ground Up | The Three-Lens Bottleneck Diagnostic


Step 1: Observe the Process in Motion

There is no substitute for time spent on the floor. Stand and watch the process from raw materials to finished goods. In some environments, it is obvious. One piece of equipment causes all the backups. You see queues building up in front of it. Operators are waiting. Downstream assets are starved.

But in many facilities, the signal is buried in the noise. The constraint may move over time. It may vary between shifts or change based on product type. That is why direct observation needs to be paired with process understanding, conversations with operators, and basic data review.

Step 2: Understand the Full Process

You do not need to be a chemical engineer to understand a chemical process. But you do need to know how materials flow, what the control points are, and where rework, waste, or waiting tends to happen.

This takes time. Not just documentation. Not just SCADA tags. Time.

Ask questions. Whiteboard it. Learn how raw materials are staged, how changeovers affect timing, and where inventory typically builds up. These conversations will give you a map of where to focus. And that map is often different from the official process flow diagram.

For a deeper breakdown of how to walk through processes and align them with system data, see:
🔗 Guide to Manufacturing Line Assessments
🔗 Foundations of Process Control and Sequencing

Step 3: Talk to the Operators

If you are serious about finding bottlenecks, talk to the people who run the equipment every day.

Engineers will often focus on faults, logic, or alarms. Operators notice different things. They know when something "feels off." They know which machine they always have to babysit. They know which tasks break rhythm and which ones flow smoothly. Their observations are grounded in repetition, not theory.

A useful analogy is driving a car. You may not know what is wrong with your vehicle, but you know when it is running poorly. It pulls to one side. It is making a weird noise. It just does not feel right. That is the same kind of insight operators bring. They cannot always explain the root cause, but they can point you to the right part of the process.

If you are building a list of process issues or root cause targets, operator feedback should always be your starting point.

Step 4: Validate with Data

You do not need a full-blown digital twin to analyze bottlenecks. But you do need some basic data to support or challenge your observations.

Look at runtime data, downtime logs, counts, rejects, or manual cycle entries. Even basic metrics like idle time, starved time, or fault frequency can point you to where the constraint is most likely sitting.

A well-placed sensor or counter can clarify what an hour on the floor may not. And if you have a historian or SCADA system in place, look at event sequences or tag trends over time. One of the easiest ways to confirm a bottleneck is to track how long material waits at a specific station versus others.
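That wait-time comparison is simple enough to do with a few lines of analysis once the events are in hand. The sketch below assumes a toy event log of (station, minutes waited) rows; in practice these would come from your historian or SCADA event table, and the station names and numbers here are invented.

```python
from collections import defaultdict

# Toy event log: (station, minutes material waited before processing).
events = [
    ("depalletizer", 0.5), ("filler", 1.2), ("capper", 6.8),
    ("labeler", 0.4), ("capper", 7.5), ("filler", 0.9),
    ("capper", 6.1), ("labeler", 0.6),
]

totals, counts = defaultdict(float), defaultdict(int)
for station, wait in events:
    totals[station] += wait
    counts[station] += 1

# Average wait per station; the outlier is your bottleneck candidate.
avg_wait = {s: totals[s] / counts[s] for s in totals}
suspect = max(avg_wait, key=avg_wait.get)
print(suspect, round(avg_wait[suspect], 2))
```

A result like this does not prove the constraint on its own, but it tells you which station deserves the next hour of floor observation.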

For examples of how to structure plant data for these kinds of insights, check out:
🔗 Designing Smarter PLC Data Models
🔗 SCADA and MES Integration Fundamentals

Planned Downtime vs Unplanned Downtime

I remember this story as if it happened yesterday.

At one of the plants I worked with, I noticed a senior manager editing downtime events in the MES system. Curious, I asked him what he was doing. Without hesitation, he told me he was relabeling events. Specifically, he was changing unplanned downtime into planned downtime.

This immediately raised a red flag.

In most manufacturing environments, performance metrics drive everything. They influence bonuses, reviews, and investment decisions. Plant-level KPIs are not just numbers. They are loaded with consequences. So I was surprised to see a senior leader manually adjusting production data that had been automatically collected by the system.

His reasoning? Downtime labeled as "planned" does not impact OEE, while "unplanned" does. That was enough to justify the change.

Figure 3 - Numbers, Narratives, and the Plant Floor: Building Trust in Data from the Ground Up | Planned vs Unplanned Downtime: The KPI That Breaks Itself


Why This Matters

This kind of behavior is not rare. In fact, according to a report by LNS Research, fewer than half of manufacturers consistently define and categorize downtime events across teams and systems. This lack of structure causes confusion, manipulation, and missed opportunities for real improvement.

Unplanned downtime reflects failures and surprises. These are the moments where something went wrong and operations stopped. Planned downtime, on the other hand, includes maintenance, cleanups, or changeovers that were known and scheduled. The two categories should be fundamentally different.

When those definitions are unclear or left to individual judgment, the integrity of the data collapses. More importantly, the improvement process stalls. Instead of using the data to drive action, teams spend time trying to control the narrative.

What Actually Happens on the Floor

In many facilities, what counts as planned or unplanned shifts from team to team. One shift might log a water leak as unplanned. The next shift, under pressure to hit targets, might recategorize it as a "planned inspection." A scheduled lunch break might be extended but not tracked. A machine jam might be blamed on a supplier issue, even though it should have triggered a maintenance response.

This inconsistency builds over time. Eventually, no one really knows what the numbers mean. As a result, leadership may launch the wrong improvement projects, overlook real reliability issues, or mistakenly believe performance is improving.

In some plants, I have seen entire lines get praised for hitting OEE targets, while everyone on the floor knows that dozens of unplanned stops were relabeled to protect the dashboard.

What Good Looks Like

Strong operations teams take this seriously. They define their categories clearly and enforce them consistently. A few guiding principles help:

  • Create Standard Definitions
    Everyone in the plant should know what qualifies as planned downtime and what does not. These definitions should be visible, documented, and part of onboarding and daily routines.

  • Build the Definitions into the System
    Use MES or historian platforms to automate tagging based on work orders, scheduled events, or shift change times. Eliminate ambiguity where possible.

  • Lock Categories Where It Makes Sense
    Limit who can override downtime categories. If adjustments are needed, route them through a review process, not manual edits.

  • Audit and Review Monthly
    Compare logs across shifts. Look for patterns. If downtime definitions are drifting, correct them quickly and use the opportunity to train or clarify.
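The second principle, building definitions into the system, can be as simple as a rule that only tags an event "planned" when it falls inside a scheduled work-order window. The sketch below assumes hypothetical maintenance windows pulled from a work-order system; the dates and window names are invented.

```python
from datetime import datetime

# Hypothetical scheduled windows from a work-order system: (start, end).
windows = [
    (datetime(2024, 3, 1, 6, 0), datetime(2024, 3, 1, 7, 30)),    # CIP cleanup
    (datetime(2024, 3, 1, 14, 0), datetime(2024, 3, 1, 14, 45)),  # changeover
]

def classify(event_start: datetime) -> str:
    # Planned only if the stop begins inside a scheduled window;
    # everything else defaults to unplanned, with no manual override.
    for start, end in windows:
        if start <= event_start < end:
            return "planned"
    return "unplanned"

print(classify(datetime(2024, 3, 1, 6, 40)))   # inside the CIP window
print(classify(datetime(2024, 3, 1, 10, 15)))  # no matching work order
```

With a rule like this in the MES, relabeling an unplanned stop requires creating a real work order, which leaves an audit trail instead of a quiet edit.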

A Final Thought

Metrics are powerful only when they are trusted. Once teams start managing the numbers instead of managing the problems, the entire improvement culture erodes.

If your facility reports high OEE but no one believes the data, you have a deeper issue than equipment performance. Start by aligning on what your numbers actually mean. The difference between planned and unplanned downtime is not just about categories. It is about culture, accountability, and whether your data tells the truth.

How to Break Into Controls Engineering

I keep seeing the same question pop up on Reddit and other forums.

How do I get into controls? How do I learn PLCs, HMIs, SCADA, or even get a job in automation at all?

Let me start with a brief story.

When I graduated with a degree in electrical engineering, I had never even heard of manufacturing control systems. My coursework focused on power systems, renewable energy, and basic control theory using MATLAB. I had taken classes in drives and power electronics, but I had never touched a PLC or seen an HMI screen. The term SCADA might as well have been a typo.

Still, I applied everywhere I could. I managed to get through the interview process at Procter and Gamble and landed my first job in controls. No prior automation experience. No specific training in manufacturing. Just enough foundational understanding to show I could learn quickly and add value.

That story is not typical. Most companies are not Procter and Gamble. They do not run structured programs for new grads. They do not have months to get someone up to speed. They want candidates who already know the basics of control systems and can contribute early.

So how do you set yourself apart? Here’s what I would do today if I were trying to break into this field. You can tackle these in parallel, depending on your availability and learning style.

Figure 4 - Numbers, Narratives, and the Plant Floor: Building Trust in Data from the Ground Up | Break Into Controls Engineering in 4 Steps


Learn Control System Fundamentals

Control systems are not complicated. Most applications on the plant floor involve simple digital logic. Inputs turn on outputs. Sensors trigger actuators. That is the majority of what happens behind the scenes.

But that does not mean you can ignore the rest. You should understand how a PLC works, what scan time is, how logic is structured, and how safety circuits tie into the overall design. You should be able to sketch out how a basic machine runs, from pushbutton to motor starter.
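To make the scan-cycle idea concrete, here is a toy model of the classic start/stop seal-in circuit, expressed in Python rather than ladder logic. A real controller repeats this read-solve-write cycle continuously; the function below stands in for one scan's worth of logic.

```python
def scan(start_pb: bool, stop_pb: bool, motor: bool) -> bool:
    # Seal-in: motor runs if (start pressed OR already running)
    # AND the stop button is not pressed.
    return (start_pb or motor) and not stop_pb

motor = False
inputs = [  # (start, stop) as sampled once per scan
    (True, False),   # operator presses start -> motor turns on
    (False, False),  # start released, seal-in holds -> still on
    (False, True),   # stop pressed -> motor turns off
    (False, False),  # nothing pressed -> stays off
]
for start_pb, stop_pb in inputs:
    motor = scan(start_pb, stop_pb, motor)
    print(motor)  # True, True, False, False
```

If you can explain why the motor stays on after the start button is released, you understand more about real plant-floor logic than most new graduates.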

Pick a Platform and Go Deep

This is the advice I wish someone had given me earlier. Choose a platform you see often in job postings and learn it thoroughly. Whether it is Allen-Bradley, Siemens, or something else, your goal is to get comfortable with that ecosystem. Understand the hardware lineup. Learn the software interface. Know the basics of troubleshooting, wiring, and communication.

It is tempting to play with open-source PLC simulators or Arduino projects. Those can be fun and helpful for concept development. But hiring managers and integrators are looking for experience on the tools they actually use in production. Learn what makes those platforms different and how they are typically applied.

If the cost of hardware or software is an issue, do not give up. Reach out to local distributors. Explain your situation. Many are willing to loan out equipment or give you trial access to software. That one conversation may also open the door to your first internship or freelance project.

Learn the Industry Context

Controls engineering does not exist in a vacuum. It always supports a process. That process can be automotive assembly, food packaging, chemical blending, or something entirely different.

You do not need to become an expert in every industry. But you should become curious.

If you want to work in automotive, learn about takt time, just-in-time production, and the role of robotics. If you want to work in food and beverage, learn about washdown requirements, packaging lines, and batch processing. If you are applying to a specific company, understand their core products, production volume, and common bottlenecks.

This kind of contextual knowledge helps you speak their language. It shows that you are not just a tech enthusiast, but someone who wants to solve real manufacturing problems.

Final Thought

Breaking into controls engineering is not easy. But it is absolutely doable if you take a structured approach. Learn the fundamentals. Choose one platform and master it. Ask questions. Get your hands on real tools. Show genuine interest in the industries you want to support. No one expects you to know everything. But they do expect that you have put in the effort to understand the basics and build a foundation.

And when you do land that first role, remember that your career is not built on one ladder. It is built on momentum. Take the next step, then the next. With time, your questions will change. Your confidence will grow. And the door you opened with one project will turn into a long, rewarding career.

Conclusion

This issue was about more than data. It was about truth. We looked at why so many digital initiatives fail to deliver value. It is not because teams lack tools. It is because the reality of plant operations is often hidden behind inconsistent labels, outdated systems, and well-intentioned edits that shift the story without solving the problem.

Here are the takeaways to keep in mind:

  • Data is not useful unless it is trustworthy. That means clean inputs, consistent definitions, and aligned teams.

  • You cannot fix what you do not understand. Spend time on the floor. Watch the flow. Talk to operators. Then check the numbers.

  • Metrics are only as strong as the culture behind them. When teams adjust data to protect KPIs, improvement stalls.

  • Career growth in controls starts with focused learning. Pick one platform, build something real, and connect it to the process.

Whether you are building a system, running a line, or just getting started, progress begins with clear signals and honest feedback.

See you in the next issue.

Get in Touch