I’ll apologize in advance for the slightly scattered nature of this post. This is my brain dump of all the really cool stuff I saw and heard at OpsManage this year.
Before I begin: some of these items were publicly discussed in canned presentations, while others I picked up in conversations with some of the powers that be. Anything that wasn’t part of a public discussion I’ll mark with **, so don’t go asking around about when feature XX might be released; you may get a denial that the particular feature ever existed or was ever discussed. Also, the screenshots I’m including are from a beta release, so if they change slightly in the production release, don’t give me a hard time.
1) Right out of the gate, support for vSphere 5! I talked with Rob Kambach for a while about this one. They have completed a battery of tests and found no issues. At this point they need to go through a documented, formal testing regimen before they officially announce support. Look for this somewhere around Q1 of next year. It also sounds like they are going to support a wide range of features such as HA, Fault Tolerance, Snapshots, etc. They are actually publishing a 700+ page document on Virtualization and High Availability for System Platform. Most of it is Hyper-V focused, but there’s a lot of good information in it. I’ve read through parts of the beta version and I definitely recommend it. Also, Brent Humphreys and I were having a discussion a while back about how we’d configure an RMC between machines running in two different datacenters. We speculated that setting up a dedicated VLAN for RMC traffic “should” work. Well, in this document they address the issue and confirm that VLANs are supported for all node-to-node communications, including RMC traffic.
2) Lots of support for new Server 2K8 R2 remote features. One of the coolest new features in 2K8 R2 is the concept of remote apps. Think terminal services, where the app is running on a remote server, but instead of immersing yourself in a complete remote desktop, you run the app from your local machine. Just double-click an icon and you’d think the app is running on your local machine. What’s actually happening is that the app is running back on the server, and it’s using something like RDP technology to serve up the graphical portion to your computer and relay your clicks back. This is really, really cool stuff. Here’s the first link I could find on the Microsoft website about this technology.
3) Skelta/Workflow is now a first-class citizen. Once you install it, all of your objects will have a Workflow tab. How would I use this? Say you want a supervisor to be notified every time an HH alarm with a priority < 100 goes off on one of your analog objects. You can configure a workflow on your template that sends this notification and waits for the supervisor to acknowledge the alarm before the operator is allowed to acknowledge it. I’m expecting some really big things from the new workflow engine.
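To make the trigger condition above concrete, here’s a rough sketch of the rule in Python. This is purely my own illustration of the logic (the names are mine), not Skelta or ArchestrA code:

```python
# Illustrative predicate for the workflow trigger described above:
# fire the supervisor-acknowledge workflow when an HH alarm with
# priority below 100 goes off on an analog object.
def should_start_workflow(alarm_type: str, priority: int, object_category: str) -> bool:
    """Return True when the supervisor-notification workflow should fire."""
    return (alarm_type == "HH"
            and priority < 100
            and object_category == "Analog")
```

In the product, you’d express this on the object template’s Workflow tab rather than in a script, but the condition being evaluated is the same idea.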
4) Tons of improvements around e-signatures. The biggest one is that you can split out the verifier function. Before, you had no good way to limit who could be a verifier; that’s why we ended up writing our own prompting object that built in all of these features. We’ve had secured and verified writes for a while now.
What we haven’t had is a good way to control who can verify writes. That has changed with a new operational permission called Verify Writes.
The idea here is that you would set up one group, such as operators for an area, and they could do the standard operator things. Then you could set up another group for supervisors or foremen, and they would have the Can Verify Writes permission. Now an operator can change a value, but they have to get a supervisor to verify it. An even neater concept is that someone from the quality group can have no privileges at all except Verify Writes. So when the operator attempts to declare a batch complete and ready for further processing, the quality person can be there with them and verify the answer, essentially authorizing the action. The log entries have also been improved: you now know both people who participated in the transaction.
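Here’s a toy model of that split-permission idea in Python. All of the names (CAN_MODIFY, CAN_VERIFY_WRITES, the user groups) are my own inventions to illustrate the concept; this is not Wonderware code or its security API:

```python
# Toy model of the "Verify Writes" split described above -- illustration only.
CAN_MODIFY = "CanModify"
CAN_VERIFY_WRITES = "CanVerifyWrites"

# Hypothetical groups: note the quality person has ONLY the verify permission.
USERS = {
    "operator1":  {CAN_MODIFY},
    "supervisor": {CAN_MODIFY, CAN_VERIFY_WRITES},
    "qa_person":  {CAN_VERIFY_WRITES},
}

def verified_write(attribute, value, initiator, verifier, store):
    """Apply a write only when a distinct, authorized second person signs off."""
    if CAN_MODIFY not in USERS.get(initiator, set()):
        raise PermissionError(f"{initiator} cannot modify {attribute}")
    if verifier == initiator:
        raise PermissionError("verifier must be a second person")
    if CAN_VERIFY_WRITES not in USERS.get(verifier, set()):
        raise PermissionError(f"{verifier} cannot verify writes")
    store[attribute] = value
    # Log both participants, mirroring the improved log entries mentioned above.
    return {"attribute": attribute, "value": value,
            "done_by": initiator, "verified_by": verifier}
```

So `verified_write("BatchComplete", True, "operator1", "qa_person", tags)` succeeds, but the operator can’t self-verify, and a user without Can Verify Writes can’t sign off.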
What I didn’t see in my release was any indication that it was a verified write. I do remember, however, seeing this demoed at the conference, and it looks like they’ve updated the Description column to include the fact that it was a verified write.
Another cool thing they’ve done is allow you to enter credentials for an operator who isn’t even logged on. What’s neat about this is that if the operator just needs to change something real quick, they don’t have to actually log on.
Supporting all this functionality is the ability to use smart cards. A smart card is akin to an access badge, but the operator places the card in a reader at the HMI station. Then all they have to do is enter a PIN in place of a password. More secure and faster... I love it.
Finally, there are a couple of really cool features that are similar, so I’ll talk about them together. They have added script functions in the graphics called SignedWrite() and SignedAlarmAck(). The intent appears to be to allow the designer to give the operator an alternate way to enter or modify data. Once the data has been entered or modified, the script calls SignedWrite() to attempt to write the new value to the attribute. What you can do with this, however, is inject a pre-defined comment or a pre-defined list of comments. Imagine this scenario: an operator finds a cold storage chamber out of spec. They go to adjust the set point. When they adjust the set point, a SignedWrite() is fired and they are presented with a pre-defined list of comments to select from. They can’t just enter “Didn’t like current temperature so adjusted”; they would only have comments like “Added Material to Load”, “Ambient Conditions out of Spec”, “Controller too Variable”, etc. In regulated industries it is critical that operators don’t get too crazy with their comments on alarms and data entry. One wrong phrase in a comment could spin off weeks of work trying to explain it away, even if it is the truth. I think this could be one of the most underrated new features. Wow!
Here are a couple of dummy calls to give you an idea of how these are going to work. See anything on SignedAlarmAck() that you like?
SignedAlarmAck( Alarm_List, Signature_Reqd_for_Range, Min_Priority, Max_Priority, Default_Ack_Comment, Ack_Comment_Is_Editable, TitleBar_Caption, Message_Caption );
SignedWrite( Attribute, Value, ReasonDescription, Comment_Is_Editable, Comment_Enforcement, Predefined_Comment_List );
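To show what the Comment_Is_Editable / Predefined_Comment_List pairing might do, here’s a small Python model of the comment-enforcement logic. This is my own sketch of the concept, not the actual SignedWrite() implementation:

```python
# Illustrative model of comment enforcement on a signed write -- my own
# Python sketch, not product code. The comment list comes from the
# cold-storage example above.
PREDEFINED_COMMENTS = [
    "Added Material to Load",
    "Ambient Conditions out of Spec",
    "Controller too Variable",
]

def choose_comment(comment, comment_is_editable, predefined=PREDEFINED_COMMENTS):
    """Return the comment to log. When editing is disabled, the operator
    must pick from the predefined list -- no free-typed regulated text."""
    if comment_is_editable:
        return comment  # free text allowed
    if comment not in predefined:
        raise ValueError("comment must be selected from the predefined list")
    return comment
```

With editing disabled, “Didn’t like current temperature so adjusted” is rejected, while a selection from the approved list goes through.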
5) Buffered data. Where do I begin on this one? Let me be the first to say I’m still a little confused. Here is how the help files describe buffered data:
The buffered data feature enables efficient accumulation and propagation of VTQ (Value, Time, and Quality) data updates, without folding and data loss, to data consumers such as objects, alarms, the Historian, and scripts from field devices that support buffering.
Buffered data is defined as data captured and stored locally on a remote device for later transfer to a supervisory system for processing, analysis, and long-term storage. The Buffer property is input-only.
Ok, that’s pretty clear. It seems like this is built for RTUs and the like, where the remote unit might accumulate some data and forward it on with quality and timestamps. Interesting. The only problem is that the demo I saw is 180 degrees from that. The demos I saw were touting buffered data as a way to collect data really, really fast. Imagine you have the same value from a PLC and the object is on a 1-second scan. Here is what an overlay of buffered and non-buffered data might look like.
Here is what I think MAY be going on. The demos they are showing might be using buffering on the end device to put together an array of values and then forward those values on to IAS, making it appear faster. However, when I chatted with Rob K. about this, he indicated that the data collection was running as fast as it possibly could, “out of band” (my words, not his). Either way, this looks like a really neat feature that could be very useful.
My thoughts on how it could be used? Two areas. First, imagine you have a piece of equipment that goes through different modes, and in one particular mode it’s critical that you capture detailed information about what the machine looked like, say during a pressure test. If what I was told was true**, that you can turn buffering on and off at runtime, then you could flip this guy into high-speed mode during the pressure test and turn it back off afterward. Another way I could see using this is for super-critical data. In FDA-regulated industries, losing data is a huge no-no. The only problem is that if we lose network connectivity to our PLC, there is nothing we can do to recover from that. The new Foxboro PAC has some neat new features (that may actually dovetail with this) whereby it will buffer history and alarm data locally until a network connection is re-established. What about doing that with my Allen-Bradley ControlLogix? Maybe it detects a lost heartbeat and goes into buffer mode, capturing a value every minute or some reasonable time frame to save on space. Once the connection is re-established, my object hooks back up, sees there is data in the buffer, processes it, then moves on. This could even work with alarms too.
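The lost-heartbeat scenario above is basically store-and-forward. Here’s a minimal Python sketch of that pattern, assuming VTQ samples queue locally while the link is down and flush in timestamp order on reconnect. This is my own illustration of the technique, not the Foxboro PAC or IAS implementation:

```python
# Store-and-forward sketch: while the connection is down, samples buffer
# locally as VTQ tuples; on reconnect the backlog flushes oldest-first,
# then live delivery resumes. Illustration only, not product code.
from collections import deque
from dataclasses import dataclass

@dataclass
class VTQ:
    value: float
    timestamp: float  # seconds since epoch
    quality: str      # e.g. "Good" or "Buffered"

class BufferedChannel:
    def __init__(self):
        self.connected = True
        self.backlog = deque()   # local buffer used while disconnected
        self.delivered = []      # stands in for the historian/alarm consumers

    def sample(self, vtq: VTQ):
        if self.connected:
            self.delivered.append(vtq)
        else:
            self.backlog.append(vtq)  # heartbeat lost: buffer locally

    def reconnect(self):
        self.connected = True
        while self.backlog:           # flush oldest-first so timestamps
            self.delivered.append(self.backlog.popleft())  # stay in order
```

Because every sample carries its own timestamp and quality, the consumer can tell after the fact which values arrived late from the buffer, which is exactly what you’d want for an FDA audit trail.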
I think I’ve got a lot of reading to do on this one. I suspect the first group of folks to really figure this out could have a serious leg up from a system resiliency standpoint.
Ok, this installment has gone on long enough; back to struggling with my Silverlight app.
Next week is Turkey week, so I probably won’t put anything out. The week after, though, I promise another post on some new features, especially the new ShowGraphic() function.