
How I Learned to Stop Worrying and Love InBatch

Do you remember the now classic scene in The Lion King where a group of hyenas are saying the name of the king, “Mufasa,” to each other and the mere mention makes them shudder? “Ooooh! Do it again!” In my experience, there is an automation product that has the same effect on engineers and customers alike. What product is that? Wonderware’s InBatch! No other software product has made my friends and colleagues shiver and shrink.
So why does InBatch instill “Mufasa” level dread? Oddly enough, the main reason InBatch is feared is the same reason it’s worthwhile. It’s powerful. InBatch is an incredibly flexible product that can do just about anything imaginable in automation and recipe management. But the only way to make it so powerful is to make it incredibly configurable. And that means that InBatch requires a pretty steep learning curve before that potency is accessible. People just don’t know how to use it. And when they try, their efforts often go poorly. It has taken me years and multiple projects to really get my arms around this software and figure out all the ins and outs and how-tos and what-to-dos. And now I feel like I am there. I am on board. The power is mine!
Over the past year and a half, I’ve been working on multiple projects with one of our partners who has a long history with InBatch and solid standards for how their installations operate. Their standards are highly integrated with System Platform, InTouch, and all the other Wonderware products. Their visualization approach prominently presents what is running and how efficiently, without requiring the standard Environment Display tools. And best of all, it works really well.
So for a year and a half, I’ve been living and breathing InBatch. I’ve been leveraging what we’ve done in the past. I’ve been learning from what our partner is doing. I’ve been adapting what I know for new project challenges. The thought of the next InBatch project no longer fills me with terror. I’m looking forward to the next opportunity to use this powerful software.
So do you fear InBatch? Tell us about your experience!


Reduce & Manage Unscheduled Downtime

Throughout my years working in systems integration since 1976, as a technician, engineer, manager, and owner, I have experienced the many downfalls of downtime and the snowball effect it causes for everybody involved. I have worked extensively in the Life Sciences industry providing validated designs and installations, but unscheduled downtime, no matter what the industry, can be frustrating and expensive.
Unscheduled is the key here, as most production systems experience some form of downtime (both scheduled and unscheduled). The big killer in unscheduled downtime is the reaction time: noticing the issue, diagnosing the problem and its solution, and then mobilizing to correct the problem. In scheduled downtime the solution is usually defined and the mobilization is planned before the system goes down, so the unproductive time is minimized (= less lost revenue).
Whether a downtime tracking program or a manual means of listing the typical downtimes is used, the goal is the same: predict the types and occurrences of unproductive machine time, allowing for the preparation of shorter and less chaotic machine interruptions. This all equates to higher productivity from the same resources and a more consistent quality product.
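However the downtime log is captured, summarizing it takes only a few lines of code. The sketch below, with hypothetical causes and durations, tallies total unproductive minutes per cause and ranks them worst-first, which is exactly the kind of prediction that lets interruptions be prepared for:

```python
from collections import Counter

# Hypothetical downtime log: (cause, minutes) pairs, as might come from
# a tracking program or a manually kept list. Names are illustrative.
events = [
    ("jam", 12), ("changeover", 45), ("jam", 8),
    ("sensor fault", 30), ("jam", 15), ("sensor fault", 22),
]

# Total unproductive minutes per cause, sorted worst-first, so the
# costliest interruption types can be planned for ahead of time.
totals = Counter()
for cause, minutes in events:
    totals[cause] += minutes

for cause, minutes in totals.most_common():
    print(f"{cause}: {minutes} min")
```

Ranking by total lost minutes rather than by event count is the usual Pareto-style view: a rare but long interruption can matter more than a frequent short one.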
Here are some other ways to eliminate unscheduled downtime.
By: Duane Grob

Just How Important is Cyber Security in Control Systems?

NERC (North American Electric Reliability Corporation) held its second grid security exercise, or GridEx, over a two-day span. During this exercise, nearly 10,000 electrical engineers, cybersecurity specialists, utility executives, and F.B.I. agents wrestled with an unseen, virtual “enemy” trying to disrupt the electrical infrastructure in the U.S. It included simulated computer viruses, line and equipment damage, and even first-responder deaths in an effort to evaluate participants’ abilities to understand, communicate, and neutralize a multitude of simultaneous threats.
This type of exercise is important for the people and organizations involved in securing our cyber infrastructure, helping them gain a real-world(ish), real-time analysis of the structures and procedures guarding these assets. As shown in a recent Control Engineering feature article, it may be easier than previously anticipated for cyber intruders to gain access to networked control systems. Their cyber security experiment revealed that even lesser-skilled and inexperienced hackers, who did not realize the target was a “honey net” (a fake asset used to lure them), were able to find, access, and manipulate these fake municipal water utility network control systems.
As a result of technological and geo-political changes, some industries have made changes in the form of regulations that put specific requirements in place around critical infrastructure security. Many of these industries, such as power generation, nuclear, chemical, and water, are perhaps obvious institutions where such a focus on security is warranted. Regardless of your opinion of the likelihood of cyber and infrastructure attacks, most will agree those groups represent the likeliest targets, with the goal of such attacks being to strike fear, disrupt everyday life, and cause physical and economic damage. Especially when weighing all of those potential repercussions to the population at large, one can understand the reasoning behind these regulations. But where does that leave other industries that have similar infrastructure, technologies, and, presumably, security gaps?
What are the security risks and potential consequences associated with a pudding or ibuprofen manufacturing line? Unless the process consists of superheating a vessel or something similar, the chances are probably very low that any significant physical damage or destruction would result. That leaves the most likely consequences revolving around a bad batch of product caused by changes to set points and other quality-influencing parameters. The chance that such bad-quality product would actually leave the plant and make its way to the consumer is fairly low, given the quality procedures most companies implement. That leaves mostly a corporate-sabotage motive: forcing a target to create and scrap a lot of waste product (or unnecessarily consume raw materials, etc.). While the loss of a batch or materials might have real cost significance, because that threat is aimed strictly at financially impacting the target, the likelihood that someone would be skilled and motivated enough to pull off such an act is perhaps relatively low.
So should those industries then not concern themselves with cyber security? Is the low potential for motivation, and for the ‘havoc’ that can be caused, reason to say that the costs of securing systems outweigh the risk they protect against? Or does the fact that people out there can access these systems and ‘do bad things’ justify the costs associated with keeping these assets secure?
By: Brian Fenn

Fielding a Winning Team with Home Grown Talent

Writer: Nic Imfled
I love baseball. There is nothing quite like cheering your team toward postseason success. But success isn’t easy. Teams have to be built. In order to achieve the ultimate goal of a world title you need a healthy blend of veteran players and home grown talent. The same is true with successful companies. Over Avanceon’s 30 years in business, we’ve sought to build and maintain a team with the proper blend of seasoned veterans and developing all stars.
As I’ve thought about this recently, I’ve looked at our team and realized something. We’ve got a lot of great seasoned, veteran engineers and we’ve got a great crop of young talent. But what caught my attention is that a lot of our veterans are home grown. We’ve had great success hiring young engineers as interns still in college or into entry level positions fresh out of college. Most of these young engineers stay with Avanceon for years and grow to be some very valuable and experienced members of our team.
I was one of those young engineers who was hired and grown. I was brought on board back in 2000 as an intern through Drexel University’s co-op program. I continued at Avanceon through subsequent internships and worked part time through the school year. In 2004, I came on as a full-time entry-level engineer. Over the following nine years, I’ve grown by leaps and bounds in my knowledge of the automation industry, how to meet our customers’ needs, how to execute projects, how to work on a team, etc. I’ve risen through the ranks and am now a level IV (of IV) engineer in our Pharma/MES group. I’m not the only engineer like me at Avanceon. Perhaps up to 50% of our engineers are home grown.
Why have we had success in raising up young engineers? I think there are two reasons:
First, and most importantly, with a young engineer we have an opportunity to train them into our culture and values. For example, our culture is built on creative, out-of-the-box thinking that looks for ways to optimize our work tasks for efficiency. Our culture is also built on standardization and teamwork. Taking the time to sculpt and mold young engineers from the beginning of their careers is priceless.
Second, evolving expectations: While we mold our young engineers, our expectations grow over time. When they are at the beginning of their careers, we don’t expect them to have all of the answers; we just want them to learn and grow. As time goes by, we raise our expectations of their knowledge but continue to hold their hands, making sure that they succeed. During this process the engineer feels nurtured while the company gains value by shaping the young engineer into a future seasoned veteran. Mutual goals equal mutual success.
We’ve seen this success with young engineers over many years and we are looking to continue it for many years to come. Does that mean we no longer look for veteran talent? Absolutely not. In fact, we’ve recently brought on a few veterans that we are really excited about. And we are even looking for a few more. But it does mean that we are taking advantage of multiple avenues available to us to prepare, grow, and field the best possible team we can for our customers, just like a championship baseball team.
Next time we come out to work on one of your systems, ask your engineer about his experience. You might find that he is one of our home grown all stars.

Upgrading PLC5 Automation Platforms to Logix5000 Pt 2

This is the second in a two-part series highlighting the key decision points and recommendations for a successful upgrade from PLC5 automation platforms to Logix5000. In Part 1 we discussed the need for planning prior to the project start. In Part 2, we will discuss the hardware approach, the software and interface concerns, and the necessary training.

Hardware Approach

There are several aspects of the hardware platform to consider when beginning an upgrade program. If there are multiple systems to be upgraded over time, a uniform approach resulting in minimized spare parts inventory is best.
Determine whether field wiring is run directly to the I/O modules, or to terminal blocks in the panel. If the field wiring goes directly to the I/O module, it may be more efficient to use Allen-Bradley’s conversion kit for wiring arms. This allows use of the existing 1771 wiring arms, mated to a special adapter, which then connects to the new 1756 hardware underneath.
While simpler and quicker in the short term, this method results in a panel that’s harder to maintain in the long run, as the wiring is harder to trace, and some of the hardware is more difficult to access, as it is hidden by the adapters.
If a slightly longer downtime window is available, and if the I/O is wired to terminals in the panel, a good option is complete replacement of the old PLC5 hardware with new ControlLogix equipment, using pre-wired harnesses that snap onto the I/O module and run as a single cable to the termination points.

Interfaces

It is easy to focus on the PLC itself as the target of the upgrade, but all PLCs have interfaces of some sort, whether HMIs, historians, other PLCs, drives, etc. These interfaces need to be modified to be compatible with the new PLC as well. First, consider whether any outdated networks, such as Data Highway Plus or Remote I/O, are being used. If so, now is probably the best time to replace them with Ethernet.
For HMI or SCADA interfaces, there will likely be a tag database that needs to be updated. PLC5 addresses are of the format “N7:1/0”, while the new ControlLogix arrays will look like “N7[1].0”. Depending on the HMI package, it may be possible to export the tag database to a text file which can be manipulated with search and replace, then re-imported. If not, it may be necessary to change every single animation point within the HMI package.
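As a sketch of that search-and-replace step, the snippet below rewrites PLC5-style addresses into the converted ControlLogix mapped-array notation. The address pattern is an assumption covering simple data-file addresses (N, B, F, and similar files); real tag exports vary by HMI package, and structured addresses such as timer members would need additional handling:

```python
import re

# Match simple PLC5 data-file addresses like "N7:1/0" or "B3:4/12":
# file identifier, element number, and an optional bit number.
ADDR = re.compile(r'\b([A-Z]{1,2}\d+):(\d+)(?:/(\d+))?\b')

def convert_address(text: str) -> str:
    """Rewrite PLC5 addresses in a line of text to ControlLogix array form."""
    def repl(m):
        file_, elem, bit = m.group(1), m.group(2), m.group(3)
        out = f"{file_}[{elem}]"
        if bit is not None:
            out += f".{bit}"   # bit reference becomes a dot suffix
        return out
    return ADDR.sub(repl, text)

print(convert_address("N7:1/0"))   # N7[1].0
print(convert_address("B3:4/12"))  # B3[4].12
```

Applied line by line to an exported tag database, this replaces the manual per-animation-point editing described above for the common address forms.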
Any messaging in other PLCs will need to be modified to communicate to the new ControlLogix. Especially note that some older PLC5s are not able to communicate via Ethernet to ControlLogix PLCs at all, and will require a firmware upgrade to communicate.

Training

If the site maintenance staff is unfamiliar with the ControlLogix platform, some form of training should be considered. In addition to formal instruction, it is useful to set up a PLC as a ‘sandbox’ that can be used for self-directed study and familiarization.


Upgrading PLC5 to ControlLogix is usually a straightforward effort, but it is tempting to oversimplify by not considering the ancillary work required to make the project successful.

Upgrading PLC5 Automation Platforms to Logix5000

The need to upgrade existing PLC5 (or SLC500) systems to Logix5000 has become more pressing with Allen-Bradley’s assignment of the PLC5 family to its Silver Series. Silver Series is Allen-Bradley’s indication that while components are still available for purchase, they will soon be discontinued.
Considering the universal popularity of the PLC5, it can be assumed that the demand for replacement parts will very quickly outstrip the limited supply once new units are no longer available for sale.
Fortunately, in most applications, the upgrade path to Logix5000 is straightforward, provided some planning is done in advance. This two-part blog series will outline some of the key decision points and recommendations for a successful upgrade program.

Project Planning

In most cases, the upgrade will have no net effect on the operation of the system, which is to say production rates, OEE, quality, and other metrics will be the same before and after the upgrade. This means that the primary criteria for selecting and prioritizing upgrade candidates are size, complexity, interfaces (discussed in more detail below), and available downtime windows.
It is most efficient if one (or more) upgrade teams consistently perform the work throughout a facility.
It is critical to understand beforehand exactly what needs to be considered in the upgrade. HMIs and other clients will need to be adjusted to communicate to the new PLC. Other PLCs connected to the upgrade subject will need to have Message instructions modified. Remote I/O or DH+ networks for drives, remote racks, and other devices need to be replicated, or upgraded to Ethernet.
Unlike a typical automation project, the design and testing phases for an upgrade can be significantly curtailed. When the Allen-Bradley software conversion utility is used, there is minimal development of new code, and the programmers doing the upgrade do not need to be experts in the process application.
This is a double-edged sword: while the engineering costs will be greatly reduced compared to a typical project, it also means the automation team will not have the same level of familiarity with the system as if they had developed it from scratch. For this reason, it is very important to have plant personnel who are familiar with the operation of the upgraded system assisting with the startup. In Part 2 we will discuss the software and interface needs as well as the training necessary for a successful upgrade.

Serialization’s a-Comin’!

With the start of 2013 we take a moment to anticipate what changes lie ahead for our industry and what impact they will have in the new year and beyond. Have you considered the impact that Serialization will play?
Serialization is expected to play a huge role in reducing the global proliferation of substandard, spurious, falsely-labeled, falsified, and counterfeit (SSFFC) drugs. SSFFC drugs can lack active ingredients, include incorrect ingredients, not have enough active ingredient, or even have too much of an active ingredient. Due to the clandestine and illegal nature of the manufacture and distribution of SSFFC drugs, the global breadth and scope of the SSFFC problem is hard to quantify and harder still to contain and stop. What is known is the public health risk of using SSFFC drugs. Here are a few examples:

  • Illness or injury treatment failure
  • Increased drug resistance in the disease or infection being treated
  • Sickness and death


What is Serialization?

Typically, an individual drug carton has a label with the drug name, company’s name (or logo), lot number, and expiration date. In order to prevent SSFFC drugs from being manufactured, distributed, and used there must be a way to determine that a drug ‘unit’ (an example would be a small carton containing a bottle of eye drops) was made by the drug manufacturer listed on the label and that the contents match the label. Other than looking at the label (which could be a counterfeit), how can anyone really know that the drug is what the carton and label say it is? An electronic pedigree (e-pedigree) is one way to really know.
An e-pedigree is a verifiable file that tracks the ownership of a drug from initial manufacture through the supply and distribution chain all the way to a pharmacy, hospital, or doctor’s office. This e-pedigree also includes returns, recalls, and the proper disposal of drugs that have passed the expiration date. In order for the e-pedigree to be effective, it must track and trace the drug down to the smallest package or saleable unit/carton. In this manner regulating agencies can determine not only where a unit/carton originated, who had it and when, but can also identify if a counterfeit unit/carton was inserted in the supply and distribution chain.
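The idea of an e-pedigree can be illustrated as an append-only chain of custody events per serialized unit. The sketch below is illustrative only: the serial, owners, and event names are invented, and a real e-pedigree follows standards such as GS1 EPCIS and carries digital signatures to make the record verifiable:

```python
# One append-only list of custody events per serialized unit,
# from manufacture through distribution to dispensing.
pedigree: dict[str, list[dict]] = {}

def record_event(serial: str, owner: str, event: str) -> None:
    """Append a custody event to a unit's pedigree (never modified, only extended)."""
    pedigree.setdefault(serial, []).append({"owner": owner, "event": event})

# Hypothetical journey of one unit/carton through the supply chain.
record_event("SN-0001", "Acme Pharma", "manufactured")
record_event("SN-0001", "BigCo Distribution", "shipped")
record_event("SN-0001", "Corner Pharmacy", "received")

# The full chain of ownership can now be audited end to end.
chain = [e["owner"] for e in pedigree["SN-0001"]]
print(" -> ".join(chain))
```

A counterfeit unit inserted into the chain would carry a serial with no pedigree, or a pedigree whose chain of events does not connect back to the original manufacturer.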
What is the mechanism to differentiate each individual package or saleable unit/carton from each other? One mechanism is Radio Frequency Identification (RFID) tags. RFID tags can certainly provide the ability to track and trace an individual unit, and RFID information on unit/carton whereabouts can be added to an e-pedigree; however, an RFID solution requires special equipment to make and program the RFID tag, affix the RFID tag to the unit/carton, and read and track the RFID tag at points in the packaging line. The orientation of the tag placement – as well as the tag reader with respect to the tag placement as it traverses the line – is also critical. In order to make a determination of the validity of the unit/carton later in the distribution chain, an RFID tag reader would be required.
Serialization is the generation of a unique identification number that is added to the unit/carton, case, and pallet labels in the packaging line. The inspection stations keep track of the individual serial numbers on the unit/cartons and provide that information to the track and trace e-pedigree system. Although a serialization e-pedigree solution will be costly to implement, the cost impact is mitigated by the fact that packaging line printers and inspection stations already exist; labels are already being affixed to cartons, cases, and pallets.
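The serial-number bookkeeping itself can be sketched in a few lines. This is an illustration under invented assumptions (the product code and serial format are made up; real implementations follow standards such as GS1 SGTIN): each unit/carton gets a unique, hard-to-guess identifier at the packaging line, and later verification amounts to checking that the serial was actually issued:

```python
import secrets

# Set of all serials issued by this (hypothetical) manufacturer.
issued: set[str] = set()

def issue_serial(product_code: str) -> str:
    """Generate a unique, hard-to-guess serial for one saleable unit/carton."""
    while True:
        serial = f"{product_code}-{secrets.token_hex(6).upper()}"
        if serial not in issued:   # guard against the (rare) collision
            issued.add(serial)
            return serial

def verify(serial: str) -> bool:
    """A unit is only as genuine as its serial: it must have been issued by us."""
    return serial in issued

s = issue_serial("NDC12345")
print(s)
print(verify(s))                          # a genuine, issued serial
print(verify("NDC12345-COUNTERFEIT"))     # a serial we never issued
```

Using an unpredictable serial (rather than a simple counter) matters here: a counterfeiter who can guess valid serial numbers defeats the whole check.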
To read the complete article please visit here
For additional information in reference to Serialization and how Avanceon can help support you in the process please email