Burr Sutter - Director of Developer Experience | Red Hat
Feeling bludgeoned by bullhorn messaging suggesting your monolithic behemoth should be put down (or sliced up) to make way for microservices? Without question, 'unicorn-style' microservices are the super-nova-hot flavor of the day, but what if teaching your tried and true monolith to be a nimble, fast-dancing elephant meant you could deploy every week versus every 6 to 9 months? For most enterprises, weekly deployments (or something close) would fundamentally transform not only operations and business results, but also the inherent value of developers willing to step up to lead the dance lessons. See beyond the hype to understand the deployment model your business case actually demands, and if weekly deployments courtesy of a dancing (or flying!) elephant fit the bill, love the one you're with as you lead the organization's journey to digital transformation!
Keynote 9:50 - 10:30 a.m.
Building Evolutionary Architectures
Neal Ford - Software Architect | ThoughtWorks
An evolutionary architecture supports incremental, guided change as a first principle across multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier.
This keynote, based on my upcoming eponymous book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture
requires understanding how different parts of architecture interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover
how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts
to guide the architecture as it evolves.
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium.
Predictability is impossible when the foundation architects plan against changes constantly in unexpected ways. Instead, prefer evolvability over predictability. This keynote provides a high-level overview of a different way to think about software architecture.
Keynote 10:50 - 11:30 a.m.
The Journey to DevSecOps at DHS USCIS
Rob Brown - Manager, Enterprise Cloud Services | DHS USCIS
Adrian Monza - Chief, Cyber Defense Branch | DHS USCIS
Steve Grunch - Manager, Enterprise Cloud Services | DHS USCIS
Tariq Islam - Senior Solutions Architect | Red Hat
In this session, get to know how the Department of Homeland Security's USCIS division started on their journey towards a true DevSecOps culture, enabled by the adoption of an enterprise container platform. Hear from the heads of Development, Operations, and Security to get a deeper perspective from each discipline on how they viewed and embarked upon their goal of modernizing the USCIS culture, its people, its processes, and its tools to better meet the mission at DHS. You'll learn about where they began, challenges faced, successes realized, and the strategies they used to overcome common organizational hurdles in the process towards container adoption and a DevSecOps culture.
Keynote 11:30 a.m. - 12:10 p.m.
"Failure" as Success: The Mindset, the Methods, and the Landmines
J. Paul Reed - Managing Partner | Release Engineering Approaches
"Failing fast," "failing forward," and "learning from failure" are all the rage in the tech industry right now. The tech company "unicorns" seem to talk endlessly about how they've reframed failure into success.
And yet, many of us are still required to design and implement backup system capabilities, redundancies, and controls into our software and operations processes. And when those fail, we cringe at the conversation with management that will ensue.
So is all this talk of reframing "failure" as "success" within our organizations just that: talk? And what does that look like, anyway? We'll explore the mindset, the history it's rooted in, effective methods to move your organization toward it, and some land mines to avoid along the way.
Learning Docker and Kubernetes with OpenShift – Hands-on Workshop
Grant Shipley - Director | Red Hat
In this hands-on workshop, you will learn how to deploy and manage applications in a dedicated OpenShift 3 environment using the Docker container format, orchestrated with Kubernetes and OpenShift. Diving a bit deeper, we will learn how to use the Source-to-Image (S2I) project to automatically build and deploy Docker images straight from source code. After that, we will take it up a notch by learning how to add databases and scale the application to achieve fast response times for your users. At the conclusion of this workshop, you will have built a geo-spatial application backed by a MongoDB database and will understand the workflow to build, deploy, scale, and manage applications deployed using Docker, Kubernetes, and OpenShift 3.
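The workshop's build-deploy-scale workflow can be sketched as a sequence of `oc` CLI invocations. `new-app`, `expose`, and `scale` are real `oc` subcommands; the helper function and the app, builder image, and repository names below are hypothetical placeholders.

```python
def s2i_workflow_commands(app_name, git_url, builder_image, replicas=3):
    """Return the oc invocations for an S2I build, a route, and a scale-out."""
    return [
        # Source-to-Image: build a container image straight from source code
        ["oc", "new-app", f"{builder_image}~{git_url}", "--name", app_name],
        # Expose the service so users can reach the application
        ["oc", "expose", "service", app_name],
        # Scale up the deployment config (dc/ on OpenShift 3) for fast response times
        ["oc", "scale", f"dc/{app_name}", f"--replicas={replicas}"],
    ]
```

A pipeline or wrapper script could feed these lists straight to `subprocess.run`.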
Enhance! Deploying Image Recognition with TensorFlow and Kubernetes
Casey West - Architecture Advocate, Google Cloud Platform (GCP) | Google
“Enhance… enhance… enhance…” Have you ever wondered how image recognition works in the movies, or how you can take advantage of it? In this talk you’ll find out. I’ll explain the basics of machine learning and image recognition and demonstrate how they work with TensorFlow, an open source library for machine intelligence. Once we have a working image recognition system, I’ll show you how to deploy it in production on Kubernetes, an open source container management system.
1:00 - 1:30 p.m. - Track 2 (Balcony B)
Security Compliance for modern infrastructures with OpenSCAP
Martin Preisler, Simon Sekkide | Red Hat, Inc.
OpenSCAP and SCAP Security Guide are commonly used for fully automated security compliance of bare-metal machines. With modern deployments gradually moving to containers and VMs, it makes sense to explore how we can leverage vulnerability and security compliance scanning for these new infrastructures. Before we move to containers, we will briefly discuss how scanning a single bare-metal machine works to introduce all the technology. Then we will scan a container for security compliance with commonly used profiles such as PCI-DSS or USGCB. We will go over caveats and differences between scanning bare-metal machines and containers or VMs. Possible container remediation options and plans for the future will also be mentioned. After that, we will look at how we can scale up the scanning to multiple VMs and containers.
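Container scans like the one described above are typically driven by the `oscap-docker` front end. The sketch below assembles such an invocation: `oscap-docker image <ID> xccdf eval` is the documented command shape, while the helper name and the example image ID, profile ID, and datastream file are illustrative assumptions.

```python
def oscap_container_scan_cmd(image_id, profile, datastream):
    """Assemble an oscap-docker compliance scan of a container image."""
    return [
        "oscap-docker", "image", image_id,  # scan an image rather than a bare-metal host
        "xccdf", "eval",                    # evaluate an XCCDF checklist
        "--profile", profile,               # e.g. a PCI-DSS or USGCB profile
        datastream,                         # SCAP Security Guide datastream file
    ]
```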
1:00 - 1:30 p.m. - Track 3 (Balcony C)
Modernizing Legacy & Mainframe to Cloud Native with Field-Tested Patterns
Zohaib Khan - App Modernization Practice Lead | Red Hat
Government agencies and departments have a significant portfolio of legacy and mainframe applications running in production: mission-critical applications developed ten, fifteen, or more years ago.
It is not easy to simply rip and replace them all with modern technologies such as cloud, microservices, and containers. The business cannot afford downtime and is certainly not willing to pay for multi-year projects. An approach is needed to modernize
legacy and mainframe applications that allows IT to continue to deliver value while innovating from the inside out. In this session, we will cover the foundations of application modernization. We will present field-tested patterns that allow the business to minimize IT risk, avoid a big-bang approach, and modernize legacy and mainframe applications incrementally, while continuing to deliver business value. You will learn: - How to evaluate your portfolio of applications with a structured capability approach. - How to use an open-source-first strategy to increase capability while driving costs down. - How to set up a continuous Modernization Factory with Red Hat's automated tooling.
1:00 - 1:30 p.m. - Track 4 (Balcony D)
Modern delivery with GitHub and OpenShift: Branch Deployments made easy
Jamie Jones - Federal Solutions Lead
John Osborne - Senior Kubernetes Advocate | Red Hat
While federal agencies are embracing many aspects of modern software development, including the concepts of Agile, DevOps, and inner sourcing, one of the more recent improvements is using workflows that prioritize independent changes. This focus has led to a new style of deployment where specific branches are deployed into isolated test environments. While adoption of this practice is growing at companies like Airbnb, Netflix, and Capital One, the government has been slow to adopt these capabilities. This presentation will discuss how you can use GitHub Enterprise and Red Hat OpenShift together to bring this modern workflow to your teams with the tools and security you trust. Find out how branch deployments can improve your development team's efficiency without sacrificing rigor.
1:00 - 1:30 p.m. - Track 5 (Balcony E)
Federal Agency Pursues Business Logic at the Speed of Big Data
George Batchvarov - Solution Architect | NCI Inc
The federal government processes billions of forms each year based on a complex 74,000-page set of rules and regulations that grows by more than 145,000 words annually. Striving to provide a responsive and positive customer experience, the agency is challenged by the volume of data, short receiving and processing times, and ever-changing laws. The target solution processes data on arrival and allows changing business logic on the fly. Data processing must scale up and down while integrating with all necessary internal and external systems. In this session, we present the operating prototype of our target architecture, which runs jBPM and FUSE on top of JDG and achieves almost linear scalability. You will learn how to implement jBPM on JDG in order to achieve massive parallel in-memory processing, leveraging FUSE to support flexible integration with any type of system.
1:40 - 2:10 p.m. - Track 1 (Vista)
Removing Infrastructure Bottlenecks from Your DevOps Process with Ansible
Steven Carter | Red Hat
Ansible is a simple, but powerful automation tool with an agentless footprint that allows for the definition of architecture, intent, and policy as code that can be deployed across both on-prem and cloud infrastructure. This enables customers to extend their enterprise and applications into AWS in a way that maintains a consistent, secure posture as part of a continuous delivery pipeline. Customers can then natively integrate with AWS to seamlessly configure and deploy a range of AWS services such as Amazon Aurora, Amazon Redshift, Amazon EMR, Amazon Athena,
Amazon CloudFront, Amazon Route 53, and Elastic Load Balancing from within Red Hat OpenShift across a secure, consistent hybrid cloud infrastructure. In this session, we will demonstrate how infrastructure can be instantiated with code as part of a continuous delivery pipeline and describe how that integrates with an OpenShift hybrid cloud deployment.
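Instantiating infrastructure as a pipeline stage usually reduces to invoking `ansible-playbook` with an inventory and some extra variables. A minimal sketch of how a pipeline step might assemble that call, assuming hypothetical playbook and inventory paths:

```python
def provision_stage_cmd(playbook, inventory, extra_vars=None):
    """Build the ansible-playbook call a pipeline stage would execute."""
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    # Pass pipeline parameters (environment, region, ...) as extra vars
    for key, value in (extra_vars or {}).items():
        cmd += ["-e", f"{key}={value}"]
    return cmd
```

A CI job would hand this list to `subprocess.run(cmd, check=True)` before the deploy stage runs.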
1:40 - 2:10 p.m. - Track 2 (Balcony B)
OpenShift, FICAM and Compliance - Identity Management DevOps
Marc Boorshtein - CTO | Tremolo Security, Inc.
Compliance is often the first question for anyone implementing Red Hat OpenShift but also the hardest to answer, especially in federal agencies. An even harder question to answer is how to implement FICAM compliance for OpenShift, but the question is really about bridging the gap between people who write compliance requirements, people who audit those requirements, and people who implement the technology. In this session, targeted to security specialists responsible for reviewing OpenShift deployments and those trying to build a compliant solution with OpenShift,
I’ll provide a map to help explain what compliance really means, how OpenShift is deployed, and how OpenShift technology is implemented to meet compliance requirements, including examples from National Institute of Standards and Technology (NIST) 800-53, NIST 800-63 and FICAM, mapped to a technology implementation. This map will help auditors better understand the compliance of FICAM in OpenShift. The content for this session is based on my blog post: tremolosecurity.com/openshift-compliance-and-identity-management/ and will be similar to my session at Red Hat Summit
(https://rh2017.smarteventscloud.com/connect/sessionDetail.ww?SESSION_ID=104939&tclass=popup) but with a federal focus on FICAM.
1:40 - 2:10 p.m. - Track 3 (Balcony C)
Lessons from supporting a software modernization journey
Scott Jaffa - Solutions Architect | ValidaTek
The federal IT space is under pressure. In addition to efforts to modernize large, legacy applications, there is also a need to deliver new IT systems on time and on budget. However, rarely is the solution as simple as hiring a dev team, giving them a set of requirements, and having a system delivered. Organizations need to modernize their tools and workflows to support these efforts. In this talk we will look at DevOps practices and lessons ValidaTek identified as key to enabling software projects to succeed with our federal clients.
It covers practices and solutions for the systems around development, including version control, testing, configuration management, platform automation, and communication, and how they enable effective software development practices. Attendees of this talk will come away with specific practices to consider as they work to improve software project delivery in their organizations.
1:40 - 2:10 p.m. - Track 4 (Balcony D)
Planning, creating, and deploying hybrid cloud services with OpenShift.io
Burr Sutter - Director of Developer Experience | Red Hat
In this session, you will learn how OpenShift.io, combined with OpenShift Online, provides an integrated approach to DevOps. See how OpenShift.io can help minimize the time and effort it takes to build and maintain an end-to-end development toolchain, as well as create containerized dev, test, and staging environments.
1:40 - 2:10 p.m. - Track 5 (Balcony E)
Modernizing Legacy Application Portfolios with MongoDB
David Koppe - Director of Information Strategy | MongoDB
While new applications can adopt modern, agile technology stacks easily, many organizations are encumbered by legacy applications with significant technical debt. These applications can inhibit innovation and consume significant resources. Learn how MongoDB has helped some of the world's largest organizations modernize their application portfolios to drive down costs while fostering innovation and agility.
2:20 - 2:50 p.m. - Track 1 (Vista)
Containers Minus the Unicorn Blood
Jamie Duncan - Cloud Guy | Red Hat
Containers are a developer's best friend. Quickly developing your code, dropping it into a container and running it anywhere you like can be intoxicating. But what happens to that container once it leaves your laptop? What do your Ops teams have to know, and what do they have to do to keep your containers happy and healthy and scaled up for production? This is what we will be talking about. Using practical examples, we will talk about what actually happens to create a container in OpenShift and how it all works inside Linux.
2:20 - 2:50 p.m. - Track 2 (Balcony B)
DevOps and Security: Lessons Learned From Detroit To Deming
Derek Weeks - VP and DevOps Advocate | Sonatype
In 1982, the city of Detroit saw 15,000 vehicles roll off its production lines every day. To achieve this goal, Detroit's line workers were being measured on velocity, often at the expense of quality. At the same time, auto workers in Japan -- applying lessons from W. Edwards Deming -- were implementing new supply chain management practices which enabled them to manufacture higher quality vehicles, for less cost, at higher velocity. As a result, from 1962 to 1982, the Detroit auto industry lost 20% of its domestic market to Japan. The parallels between the auto industry of 35 years ago and software development practices in place today are remarkable. DevOps teams around the world are consuming billions of open source components and containerized applications to improve productivity at a massive scale. The good news: they are accelerating time to market and improving services. The bad news: adoption of these components is also increasing the risk of introducing vulnerabilities into agencies’ software supply chains. This session aims to enlighten DevOps teams, security and development professionals by sharing results from the 2017 State of the Software Supply Chain Report -- a blend of public and proprietary data with expert research and analysis. The presentation will also reveal findings from the 2017 DevSecOps Community survey where over 2,000 professionals shared their experiences blending DevOps and security practices together. Throughout the discussion, I will share lessons that Deming employed decades ago to help us accelerate adoption of the right DevSecOps culture, practices, and measures today.
Attendees in this session will learn: - What our analysis of 60,000 applications reveals about the quality and security of software built with open source components - How organizations like the U.S. Air Force, IRS, Fannie Mae, and the Department of Defense are utilizing the DevOps principles of software supply chain automation - How real-time risk management can be facilitated by employing automated support tools to execute various steps in NIST's Risk Management Framework - How to balance the need for speed with quality and security early in the development lifecycle. Attend this session and leverage the insights to understand how your agencies' DevOps and cybersecurity practices compare to others. We'll share industry benchmarks to take back and discuss with your DevOps, development, and security teams.
2:20 - 2:50 p.m. - Track 3 (Balcony C)
Build an edge-to-cloud IoT Solution
Rick Stewart, Michael Fitzurka | DLT Solutions
A demo of an intelligent IoT gateway that can provide real-time intelligence at the edge. Once the gateway is provisioned, we’ll put it into action by starting Red Hat JBoss Fuse and building and deploying the routing and business rules services. We’ll then start a sensor application that sends temperature data using MQTT to the broker, Red Hat JBoss A-MQ. These messages will be forwarded to the services that we started earlier. And finally, we’ll show the business rules to trigger the desired action when the sensor value reaches a threshold.
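The threshold rule at the end of the demo can be illustrated in miniature. This is not the JBoss BRMS rule itself, just a hypothetical Python stand-in showing the trigger logic the gateway applies to each incoming temperature reading:

```python
def evaluate_reading(temperature, threshold=70.0):
    """Stand-in business rule: fire an action once the value reaches the threshold."""
    return "trigger-cooling" if temperature >= threshold else "ok"

def process_stream(readings, threshold=70.0):
    # The gateway forwards each sensor reading to the rules service
    # and collects the resulting action for each one
    return [evaluate_reading(r, threshold) for r in readings]
```

In the actual demo, the readings would arrive over MQTT via Red Hat JBoss A-MQ rather than as an in-memory list.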
2:20 - 2:50 p.m. - Track 4 (Balcony D)
Accelerating Java Deployments with Containers
John Osborne - Senior Kubernetes Advocate | Red Hat
In this technical session, John will explain the best ways to run and package Java applications with Kubernetes, including the following tips and best practices: - Quickly deploying your applications with Maven - Leveraging the Obsidian community to quickly spin up JEE and non-JEE workloads - Nuances of Linux containers and Java garbage collection - Scaling Java applications using OpenShift - Offloading stateful data to an in-memory data grid - Building a CI/CD pipeline for faster deployments.
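One of the container/GC nuances alluded to above is sizing the JVM heap from the container's memory limit rather than the host's total memory. A minimal sketch of that calculation, with a hypothetical helper and an assumed 50% heap fraction:

```python
def jvm_heap_flag(container_mem_bytes, heap_fraction=0.5):
    """Derive an -Xmx flag from the container's cgroup memory limit."""
    # Only give the heap a fraction of the limit, leaving headroom for
    # metaspace, thread stacks, and native allocations; otherwise the
    # kernel OOM killer may terminate the container.
    heap_mb = int(container_mem_bytes * heap_fraction) // (1024 * 1024)
    return f"-Xmx{heap_mb}m"
```

Newer JVMs can do this automatically with container-aware flags, but the arithmetic above is what those flags effectively compute.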
2:20 - 2:50 p.m. - Track 5 (Balcony E)
The Data Dichotomy: Rethinking Data & Services with Streams
Chris Matta - Systems Engineer | Confluent
Typically, when we build service-based applications (microservices, SOA, and the like), we use REST or some RPC framework. But building such applications becomes tricky as they get larger, more complex, and share more data. We can trace this trickiness back to a dichotomy that underlies the way systems interact: data systems are designed to expose data, to make it freely accessible, while services focus on encapsulation, restricting the data each service exposes. These two forces inevitably compete as such systems evolve. This talk will look at a different approach, one where a distributed log holds data that is shared between services, and stateful stream processors are embedded right in each service, providing facilities for joining and reacting to the shared streams. The result is a very different way to architect and build service-based applications, but one with some unique benefits as we scale.
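The embedded stateful stream processor idea can be sketched in miniature: events from a shared log are replayed into a materialized view owned by one service. The class, event shapes, and field names below are hypothetical; in practice the log would be a Kafka topic and the processor a Kafka Streams state store.

```python
class OrdersView:
    """Stateful processor embedded in a service, materializing the shared log."""
    def __init__(self):
        self.by_customer = {}

    def apply(self, event):
        # Each service reacts only to the event types it cares about
        if event["type"] == "order_created":
            self.by_customer.setdefault(event["customer"], []).append(event["order_id"])

# The shared, append-only log of events (stand-in for a Kafka topic)
log = [
    {"type": "order_created", "customer": "alice", "order_id": 1},
    {"type": "order_created", "customer": "bob", "order_id": 2},
    {"type": "order_created", "customer": "alice", "order_id": 3},
]

view = OrdersView()
for event in log:
    view.apply(event)
```

Because the log is the shared source of truth, any service can rebuild its own view by replaying from the beginning.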
3:00 - 3:30 p.m. - Track 1 (Vista)
To Err is Human
Alex Liu - UI Platform Engineer | Netflix
3:00 - 3:30 p.m. - Track 2 (Balcony B)
Log Aggregation Patterns and Strategies with Containers
Doug Toppin - Sr Software Dev Engineer | Vizuri
Containerized applications can generate lots of logs with valuable information, but viewing the logs on a per container basis can quickly get out of control. There are various methods of accessing these container logs but it may be advantageous to integrate them into an existing log aggregation environment. This session will cover various approaches to effectively capture and forward log data.
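A common forwarding approach is for each container to emit structured, one-JSON-object-per-line records that an aggregator (e.g., Fluentd or Logstash) can parse. A minimal sketch using only the standard library; the formatter class and field names are illustrative:

```python
import json
import logging

class JsonLineFormatter(logging.Formatter):
    """Emit one JSON object per line for a downstream log aggregator."""
    def format(self, record):
        return json.dumps({
            # A container name can be attached via the `extra` kwarg of a log call
            "container": getattr(record, "container", "unknown"),
            "level": record.levelname,
            "message": record.getMessage(),
        })
```

Attach the formatter to a handler that writes to stdout, and the container runtime's log driver takes care of shipping the lines onward.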
3:00 - 3:30 p.m. - Track 3 (Balcony C)
A Catalyst for Changing Government IT
Michael Walker - Director, Red Hat Open Innovation Labs | Red Hat
You asked for a transformation catalyst: a place to meet with experts, experiment with the art of the possible, and spark the cultural change that enables DevOps and modern software development methods to thrive within the government space. So, we created Red Hat Open Innovation Labs. Labs is an immersive, residency-style consulting engagement that helps your organization jump-start modern application development and catalyze innovation. In a matter of weeks, alongside our experts, you’ll learn how to build applications the Red Hat way. Using open source technologies and principles, we’ll help you rapidly build prototypes, do DevOps, and adopt agile methodologies. You will walk away with a functioning prototype, and the methods, skills, and experience to drive transformation back within your teams. Open Innovation Labs is designed to accelerate the delivery of your innovative ideas. Ready to transform your organization? In this session, you'll gain an understanding of the importance of failing fast, and what's necessary to develop an approach to mitigate the cost and time associated with government projects gone wrong.
3:00 - 3:30 p.m. - Track 4 (Balcony D)
Deploying Self Healing Services With Kubernetes
Rob Scott - VP of Software Architecture | Spire Labs
Downtime can be both expensive and frustrating. In this session we’ll tell the story of how Kubernetes kept Spire services running through an AWS service disruption without any downtime. You’ll learn exactly what Kubernetes does behind the scenes to automatically redistribute systems. Diving deeper, we’ll cover some of the best practices for deploying highly available services with Kubernetes, including readiness probes, liveness probes, and affinity configuration.
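The probe fields named above map onto a pod spec's `livenessProbe` and `readinessProbe` stanzas. This sketch builds those stanzas as plain dictionaries: `httpGet`, `initialDelaySeconds`, and `periodSeconds` are real Kubernetes probe fields, while the helper function, paths, and port are illustrative assumptions.

```python
def probe_spec(path, port, initial_delay=5, period=10):
    """Build an httpGet probe stanza as it would appear under a container spec."""
    return {
        "httpGet": {"path": path, "port": port},
        "initialDelaySeconds": initial_delay,
        "periodSeconds": period,
    }

# Liveness: kubelet restarts the container if this check keeps failing.
liveness = probe_spec("/healthz", 8080)
# Readiness: traffic is withheld until the app reports it is ready to serve.
readiness = probe_spec("/ready", 8080, initial_delay=10)
```

Serialized to YAML, these dictionaries become the `livenessProbe:` and `readinessProbe:` blocks in a Deployment manifest.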
3:00 - 3:30 p.m. - Track 5 (Balcony E)
The Critical Nature of Data Management in a Microservices World
Tariq Islam - Senior Solutions Architect | Red Hat
With microservices now widely seen as the preferred way to deploy workloads and applications, the complexity of deployments has increased, as have the necessary organizational changes. To address this increasing complexity, we've seen the breathtaking rise of containers, container orchestration engines, and DevOps. One of the oft-overlooked aspects of this entirely new paradigm of development and operations is how data should be managed and provisioned. As the industry best-practice definition goes, each microservice should own its own data and data store. But what if you have hundreds of microservices? And what if your federal organization simply isn't structured to have a small two-pizza team per microservice to make that a feasible reality? In any and all cases, a hard look must be given to the data strategies that must come to the forefront if we are to successfully adopt a microservices architecture for the benefit of an organization and its mission.