OilComm and FleetComm Case Studies Introduction

Every year, the OilComm and FleetComm event team works with an advisory board of industry experts to develop a conference program for attendees. Our 2018 advisory board — consisting of executives and engineers from BP, Chevron, Anadarko Petroleum, Diamond Offshore, and other companies in the sector — puts forward suggestions for speakers, as well as relevant research on cutting-edge technology, helping us determine the scope of our program. Below are some of the most fascinating case studies we’ve read this year, featuring work from companies represented on the 2018 OilComm and FleetComm program. The topics of these case studies will be discussed in detail at the event.

Case Study 1 — AI-Powered Rod Services and Inventory Management Solutions for Rod Lift Consulting (RLC)

Published by HyperGiant

“Technological change is never an isolated phenomenon. This revolution takes place inside a complex ecosystem, which comprises business, governmental, and societal dimensions. To make a country fit for the new type of innovation-driven competition, the whole ecosystem has to be considered.” - World Economic Forum Founder and Executive Chairman Klaus Schwab

RLC, a rod services company acquired by Schlumberger, had long been encumbered by fragmented spreadsheets and a wealth of valuable data that wasn’t being used effectively. Its rod servicing process was tedious, time-consuming, and held as domain knowledge by a single employee; the workflow was prone to error and required an excessive amount of redundant data entry. Ultimately, our client was bottlenecked by an old-school process, which hindered its ability to service more clients and pursue more value-based work.

Our team at HyperGiant took a design-centric approach to the problem. We designed a tool that digitized RLC’s paper-based, manual processes in a way that streamlined the entire rod servicing workflow, creating a pool of accurate data along the way. We also helped standardize our client’s operations by reducing their reliance on spreadsheets in favor of a real-time database, eliminating nearly 50 percent of manual tasks. Additionally, we built the product to be customer-facing, so RLC’s customers could log in, view their inventory, and place orders in real time, whenever they wanted. Our goal was to empower RLC to take on more value-based work, service more customers, and scale geographically without being held back by an old-school, conservative process.

We redesigned the entire rod management program with the intent of consolidating all data in a centralized way. We embedded each step of the rod management workflow — whether spreadsheet-based or paper-based — inside the application and opened it up to our client’s customers, ensuring that key fields are auto-populated without the need for manual input. Our idea was to help our client save time on redundant data entry while giving their customers real-time insights whenever they needed them.

RLC’s dashboard presents the most important metrics for each of its customers: in this case, the total number of guided and slick rods by size, the number of trucks out for inspection, and the locations where those inspections took place. Clicking into a customer account, RLC could dive deeper into that customer’s inventory and complete the applicable form requirements. RLC’s customers would also get real-time visibility into their inventory: a password-protected account would allow them to view their inventory and order history and easily manage their communication with RLC within the application. Additionally, customers would be able to place new orders and download the required documentation directly in the application.
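The case study doesn’t include the dashboard’s implementation. As a rough sketch of the aggregation it describes — rod counts by type and size, trucks out for inspection, and inspection locations — here is a minimal Python example; all record and field names are hypothetical:

```python
from collections import Counter

def dashboard_metrics(inventory, inspections):
    """Aggregate the headline dashboard metrics for one customer:
    rod counts by type and size, trucks currently out for inspection,
    and where those inspections are taking place."""
    rods_by_type_and_size = Counter(
        (rod["type"], rod["size"]) for rod in inventory  # e.g. ("guided", "7/8")
    )
    trucks_out = [i for i in inspections if i["status"] == "out"]
    return {
        "rods_by_type_and_size": dict(rods_by_type_and_size),
        "trucks_out_for_inspection": len(trucks_out),
        "inspection_locations": sorted({i["location"] for i in trucks_out}),
    }
```

In the real product these records would come from the centralized real-time database rather than in-memory lists; the point is that a single query layer replaces per-customer spreadsheets.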

Case Study 2 — How BP Used Spiral Suite in the Amazon Web Services (AWS) Cloud

Published by AWS

A few minutes can make all the difference when it comes to getting the most value out of the eight petroleum refineries operated by the global energy company BP.

BP, which employs 75,000 people in 72 countries, extracts about 3.3 million barrels of oil equivalent each day. It falls to the company’s downstream segment to decide which of these feedstocks — which vary widely in terms of quality, yield, and cost of extraction — should be used to manufacture the many different fuels, lubricants, and petrochemicals that a petroleum refinery can produce.

“The prices of crude oils and the market conditions for products made from them are constantly changing,” says BP Refining Technology and Engineering Manager Troy Darcy. “To run our refineries as economically as possible, the downstream segment needs to be able to make extremely complex decisions quickly and accurately.”

To support this crucial decision making, BP downstream was using Schneider Electric Spiral Suite software to run linear programming models involving complex calculations based on thousands of inputs. But the company couldn’t take full advantage of the software’s potential power because of the long processing times resulting from its deployment in the company’s on-premises data centers.

“Even less complex analytics could take three days to set up and execute,” says BP Chief Information Officer (CIO) for U.S. Fuels Value Chains Murtaza Sitabkhan. “Also, employees running these analytics worked copies of databases downloaded onto their own computers, increasing the risk that human error might lead to diverging versions of the data.”

Today, these same analytics are being executed in minutes, not hours, and BP refineries use standardized processes to connect to and make decisions based on centrally stored datasets. What made the difference? Shifting Spiral Suite to the Amazon Web Services (AWS) Cloud.

The decision to deploy Spiral Suite on AWS was a natural step in the company’s cloud journey, which has been shaped by a 2016 decision by BP to adopt a dual-cloud strategy based on AWS and Microsoft Azure. Even before moving Spiral Suite to AWS, BP had seen substantial benefits from shifting Systems, Applications, and Products (SAP) and other workloads to the AWS Cloud. “We love collaborating with AWS because their people bring really fantastic engineering capabilities,” says BP Downstream Segment CIO Claire Dickson. “We’re running five SAP systems and have production systems on 6,000 virtual machines on AWS, and — to give just one example of the improvements we’ve seen — our patching is now orders of magnitude faster than on premises.”

BP is running Spiral Suite on Amazon Elastic Compute Cloud (Amazon EC2) instances. By using AWS Auto Scaling, BP can increase or decrease the number of Amazon EC2 instances Spiral Suite is using as calculation nodes on demand, ensuring Spiral Suite has access to as much processing power as necessary during complex calculations while avoiding paying for resources when they aren’t needed.
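The case study doesn’t spell out BP’s scaling rules. As an illustration of the pattern it describes — sizing a pool of EC2 calculation nodes to the pending work, within bounds, so you avoid paying for idle capacity — here is a minimal sketch. The function, thresholds, and names are assumptions; in practice the result would be applied to an Auto Scaling group, for example via boto3’s `set_desired_capacity`:

```python
def desired_calc_nodes(queued_jobs: int, jobs_per_node: int,
                       min_nodes: int = 1, max_nodes: int = 32) -> int:
    """Return how many EC2 calculation nodes to request: enough to cover
    the queued calculations, clamped to the Auto Scaling group's bounds."""
    needed = -(-queued_jobs // jobs_per_node)  # ceiling division
    return max(min_nodes, min(max_nodes, needed))
```

A target-tracking or step-scaling policy attached to the Auto Scaling group could express the same logic declaratively, without custom code.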

For BP, the biggest benefit of the move to AWS is just how much faster Spiral Suite can now execute complex calculations. “With Spiral Suite running on AWS, a problem that once would have required about seven hours of calculation time completes in less than four minutes, which helps us adapt to market changes in almost real time,” says Darcy.

The ease of accessing additional processing power means BP analysts have time to run even more analytical processes in parallel, greatly increasing their confidence in the decisions that the solution recommends. “By running Spiral Suite on AWS, we can analyze a model with hundreds more outputs in a fraction of the time it used to take us to run a model with just one output,” says BP Refining Technology Group Business Integration Manager Leslie Rittenberg. “The performance improvement resulting from running Spiral Suite on AWS helps us come up with more rigorous answers, subtract a lot of risk from our decision making, and capture much more value.”

BP is also using Amazon Simple Storage Service (Amazon S3) to centralize the collection and storage of the data that supports its decision making. Amazon S3 delivers 99.999999999 percent durability, provides comprehensive security and compliance capabilities, and offers query-in-place functionality on data at rest. Among other benefits of using Amazon S3, BP employees no longer download decision-making data to their own workstations, eliminating the risk of having diverging copies of datasets.

“Previously, the processes to maintain data alignment and avoid diverging copies depended on people remembering to follow procedures, with all of the risks that entails, and each site was locked into different ways of doing things,” says Darcy. “By storing our data in Amazon S3, we can rely on centrally managed systems to maintain a single source of truth for company data. That makes it easier for us to replicate business processes, share best practices, and have more robust conversations across our worldwide refining organization.”

With the project to deploy Spiral Suite on AWS, BP has chalked up yet another win for its all-in cloud strategy — a strategy that aims to do quite a lot more than reduce operational overhead.

“With the AWS Cloud, we knew we would get a reduction in our operational costs, but the real value we’ve gained has been business innovation beyond Information Technology (IT),” says Dickson. “When you can dissolve the barrier that often exists between your IT organization and the business, that’s when magic really starts to happen. The speed and elastic capacity that we get from the AWS Cloud — for both Spiral Suite and for the many other workloads we are now running there — are massively changing and transforming how we operate today.”

Case Study 3 — How NCC Group Helped a Global Energy Services Organization Protect Its Investment in the Cloud

Published by NCC Group

In this case study, our client is an energy services company with offices across the globe, specializing in industrial markets, petroleum manufacturing, and oil fields. The client had invested more than $100 million in a cloud-hosted system essential to the business’s overall functions. It recognized that the mission-critical data held within the system was its most important asset and wanted to ensure this data could be extracted in a suitable format and used elsewhere should there ever be a vendor failure.

Our client therefore wanted to work with a software assurance expert that offered a flexible and bespoke service, as well as technical and legal guidance and support. After reviewing the current provider market, the client chose NCC Group as its assurance partner.

NCC Group recommended a solution whereby the client would be able to access and extract its specific data held in the system should there be a vendor failure. A tri-party agreement was set up between the client, the vendor, and NCC Group and delivered as part of our Software-as-a-Service (SaaS) Assured solutions.

When the engagement began, NCC Group opened a communication channel between the client and cloud vendor. Our in-house legal and technical teams worked in parallel to negotiate terms and conditions, discuss the purpose of verification exercises and the steps involved to produce a suitable agreement that both sides approved of. With regular and direct dialogue between the client and the vendor, we were able to negotiate any complexities with the client’s attorney and ease any concerns from the vendor.

By providing our Data Extract Verification service, we were able to assure the client that an up-to-date copy of its data could be accessed in a suitable and reusable format. A supporting SaaS Operational Maintenance Verification will give the client the documented processes and procedures required to operate, maintain, and run the hosted SaaS system in the event of vendor failure.

Evaluating the project, our customer stated that “[NCC Group] representatives were able to identify our needs and recommend solutions that provided a comprehensive, robust continuity plan. We were able to rely on NCC Group’s legal team to help with the redline process, while its technical team ensured the verification exercises were completed successfully and in a timely manner.”

Case Study 4 — Technology to Enable Lean Oil and Gas Extraction

Published by HyperGiant

Like many large enterprises, our client — an unnamed oil and gas production company — had spent decades collecting large amounts of data about its people, processes, and customers within a fragmented suite of applications and data sources. As the organization grew and the need to leverage software became critical, the client found itself taking an ad-hoc approach to technology, creating a separate transactional application for each business problem.

The team at HyperGiant came in and helped craft an application programming interface (API) strategy for the client, intended to help the organization balance the needs of the business with the needs of its employees. We held a two-and-a-half-day workshop with more than 30 of the client’s stakeholders across all functions, and spent the next six weeks researching and recommending improvements to their software architecture. This culminated in a roadmap that has reduced our client’s operational costs by 22 percent.

Our engagement began with a deep, research-intensive dive into the client’s technology and application ecosystem. After hosting a workshop and several in-person interviews with stakeholders from both the business and technology functions, we learned that well-cost estimation was a problem whose solution had the potential to relieve several other horizontal pain points within the organization.

For the definition phase, we worked with a team of drilling engineers to identify the pain points and inefficiencies in their well-cost estimation process. Having found that the entire process ran through Excel sheets, manual calculations, and PowerPoint presentations, we decided that a web application with access to every actual well cost could automate calculations with a higher degree of estimation accuracy and speed up the process, both for drilling engineers sitting in an office and for the company man working on a rig.
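The case study doesn’t describe the estimation model itself. As a rough illustration of replacing manual spreadsheet calculations with an estimate driven by historical actuals, here is a minimal sketch; the field names and the simple cost-per-foot model are assumptions, not the client’s actual method:

```python
def estimate_well_cost(planned_depth_ft: float, historical_wells: list) -> float:
    """Estimate a planned well's cost by averaging the cost-per-foot of
    comparable historical wells and scaling by the planned depth."""
    if not historical_wells:
        raise ValueError("need at least one comparable historical well")
    cost_per_ft = [w["actual_cost"] / w["depth_ft"] for w in historical_wells]
    return planned_depth_ft * (sum(cost_per_ft) / len(cost_per_ft))
```

A production application would weight comparables by basin, rig type, and well design, and would recompute estimates as new actuals arrive, but the core idea is the same: every past well’s actual cost feeds the next estimate automatically.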

For the delivery phase, we fused our client’s domain knowledge with Agile best practices and started to build out the web application in two-week sprints. As we developed the solution, we kept close ties with drilling engineers and other people from the company to ensure that the application’s estimates aligned closely with actuals, closing the delta in between. Ultimately, we created an application that allowed engineers to create a well plan based on historical best practices, refine time and cost estimates, and monitor real-time data once the well was spudded.
