Atlantic Business Technologies, Inc.

Category: DevOps

  • Advanced Pipeline Orchestration Methods

    Using Microservices and Containers

    Microservices and containers have revolutionized the way CI/CD pipelines operate, improving agility and efficiency. By breaking down large applications into smaller, independent services, teams can work faster and with more flexibility. Containers package these services, ensuring consistency across various environments.

    Key Benefits of Microservices and Containers:

    • Better Fault Handling: If one service fails, the rest of the application remains operational.
    • Easier Updates: You can update or modify one microservice without affecting others.
    • Faster Development: Teams can build, test, and deploy smaller, independent components more quickly.

    Steps to Implement Microservices and Containers:

    1. Break down your application into smaller, manageable services.
    2. Select a containerization tool like Docker.
    3. Use orchestration tools like Kubernetes to manage your containers at scale (see the sketch below).
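
    As a sketch of step 3, the Kubernetes manifest below runs three replicas of a hypothetical containerized "orders" microservice; the names and image are placeholders, not a prescribed setup.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-service
    spec:
      replicas: 3                  # run three copies for fault tolerance
      selector:
        matchLabels:
          app: orders-service
      template:
        metadata:
          labels:
            app: orders-service
        spec:
          containers:
            - name: orders-service
              image: registry.example.com/orders-service:1.0.0   # placeholder image
              ports:
                - containerPort: 8080

    If one replica fails, Kubernetes automatically replaces it, which is what provides the fault handling described above.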

    Working with Serverless Systems

    Serverless platforms, such as AWS Lambda, allow you to focus on writing code without worrying about managing infrastructure. This can lead to cost savings and faster deployment times.

    Best Practices for Serverless Systems:

    • Utilize frameworks like AWS SAM or the Serverless Framework to simplify the creation and deployment of serverless applications (see the sketch after this list).
    • Design your applications to respond to specific events (event-driven architecture).
    • Continuously monitor your serverless functions’ performance and optimize them for better efficiency.
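
    As a minimal sketch of these practices, the hypothetical AWS SAM template below defines one event-driven Lambda function that runs only when its HTTP API event fires; the handler, runtime, and path are illustrative choices.

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Resources:
      HelloFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: app.handler        # placeholder handler in src/
          Runtime: nodejs18.x
          CodeUri: src/
          Events:
            ApiEvent:
              Type: HttpApi           # event-driven: invoked per HTTP request
              Properties:
                Path: /hello
                Method: get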

    Managing Multi-Cloud Pipelines

    Multi-cloud environments introduce additional complexity into CI/CD pipelines, but they offer flexibility and resilience.

    Tips for Managing Multi-Cloud Pipelines:

    • Use Cloud-Neutral Tools: Choose tools like Terraform or Jenkins that work across multiple cloud providers.
    • Unified Pipelines: Build a single CI/CD pipeline, using a tool such as Bitbucket Pipelines or Jenkins, that can deploy across different cloud platforms (see the sketch after this list).
    • Monitor Performance: Implement monitoring tools to track the performance of your pipeline across clouds.
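
    To make the unified-pipeline idea concrete, here is a hypothetical Bitbucket Pipelines sketch that builds once, then deploys the same artifact to AWS and Azure in parallel; all variable names are placeholder repository secrets, and the deploy commands are illustrative.

    pipelines:
      branches:
        main:
          - step:
              name: Build
              image: node:18
              script:
                - npm ci && npm run build   # produces dist/
              artifacts:
                - dist/**
          - parallel:
              - step:
                  name: Deploy to AWS
                  script:
                    - pipe: atlassian/aws-s3-deploy:0.5.0
                      variables:
                        AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
                        AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
                        AWS_DEFAULT_REGION: us-east-1
                        S3_BUCKET: ${AWS_BUCKET}
                        LOCAL_PATH: dist
              - step:
                  name: Deploy to Azure
                  image: mcr.microsoft.com/azure-cli
                  script:
                    - az login --service-principal -u $AZURE_APP_ID -p $AZURE_PASSWORD --tenant $AZURE_TENANT_ID
                    - az storage blob upload-batch --account-name $AZURE_STORAGE_ACCOUNT -d '$web' -s dist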

    Applying AI and ML in Orchestration

    AI and machine learning are transforming CI/CD pipelines, helping to predict and prevent issues, optimize performance, and improve testing processes.

    How AI and ML Can Enhance Pipelines:

    • Error Prediction: AI can predict potential failures based on historical data.
    • Pipeline Optimization: Machine learning can automate performance tuning, making pipelines run faster.
    • Intelligent Testing: AI can identify high-risk areas of the application to prioritize testing.

    Getting Started with AI/ML in CI/CD:

    1. Collect pipeline performance data (see the sketch after this list).
    2. Choose an AI platform like Google Cloud AI or Amazon SageMaker.
    3. Develop models to automate and enhance your pipeline processes.
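
    As a starting point for step 1, a pipeline can report on itself. The hypothetical Bitbucket Pipelines fragment below posts each run's build number and exit code to a metrics endpoint after every build; the endpoint URL and token are placeholders you would supply.

    pipelines:
      default:
        - step:
            name: Build and record pipeline data
            script:
              - dotnet build
            after-script:
              # $BITBUCKET_BUILD_NUMBER and $BITBUCKET_EXIT_CODE are provided by Bitbucket Pipelines.
              - >
                curl -s -X POST "$METRICS_ENDPOINT/pipeline-runs"
                -H "Authorization: Bearer $METRICS_TOKEN"
                -H "Content-Type: application/json"
                -d "{\"build\": \"$BITBUCKET_BUILD_NUMBER\", \"exit_code\": \"$BITBUCKET_EXIT_CODE\"}"

    Once enough runs accumulate, that history becomes the training data for the error-prediction and optimization models described above.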

    Common Challenges in CI/CD Pipeline Orchestration

    CI/CD pipelines, while powerful, come with their own set of challenges. Here are common issues and how to address them:

    • Insufficient Testing: Implement comprehensive unit, integration, and end-to-end tests.
    • Lack of Monitoring: Use monitoring tools like Prometheus or Grafana to track pipeline health.
    • Outdated Dependencies: Automate dependency updates using tools like Dependabot or Renovate (see the sketch after this list).
    • Resource Inefficiency: Optimize resource usage with containerization, serverless services, or right-sized cloud resources.
    • Manual Processes: Automate repetitive tasks such as testing, building, and deploying.
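
    For the outdated-dependencies problem, the fix can be a few lines of configuration. The sketch below is a hypothetical .github/dependabot.yml that opens weekly pull requests for outdated npm packages; the ecosystem and schedule are choices to tune per project.

    version: 2
    updates:
      - package-ecosystem: "npm"
        directory: "/"              # location of package.json
        schedule:
          interval: "weekly"
        open-pull-requests-limit: 5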

    Handling Large-Scale Projects

    Managing large-scale projects can be daunting, but with the right strategies, you can break down the complexity.

    Strategies for Managing Large Projects:

    • Break the project into smaller, modular parts.
    • Design reusable pipelines to speed up development.
    • Run parallel tasks to save time and improve efficiency.
    • Use Git and other version control systems to track code changes effectively.

    Dealing with Complex Pipelines

    Complex pipelines require careful management to maintain efficiency and avoid bottlenecks.

    Simplifying Complex Pipelines:

    • Use pipeline visualization tools like Jenkins Blue Ocean to get a clear view of your pipeline flow.
    • Break down large pipelines into smaller, manageable segments (see the sketch after this list).
    • Automate repetitive tasks and monitor pipeline performance regularly to identify areas for improvement.
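
    Bitbucket Pipelines, for example, supports breaking a pipeline into reusable segments with YAML anchors. The hypothetical sketch below defines build and test steps once and composes them per branch; the scripts are placeholders.

    definitions:
      steps:
        - step: &build
            name: Build
            script:
              - npm ci && npm run build
        - step: &test
            name: Test
            script:
              - npm test
    pipelines:
      branches:
        develop:
          - step: *build
          - step: *test
        main:
          - step: *build
          - step: *test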

    Evaluating CI/CD Pipeline Success

    Measuring the success of your CI/CD pipeline requires tracking specific key performance indicators (KPIs).

    Key Performance Indicators for CI/CD Pipelines:

    • Pipeline Success Rate: The percentage of pipeline runs that complete without errors.
    • Pipeline Failure Rate: The frequency of pipeline failures or errors during execution.
    • Average Pipeline Duration: The average time it takes for a pipeline to complete.
    • Deployment Frequency: How often code is deployed to production environments.
    • Mean Time to Recovery (MTTR): The average time taken to fix an issue after a failure.
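
    As a quick worked example: if 92 of 100 runs in a month complete without errors, the success rate is 92% and the failure rate 8%; if resolving those eight failures took a combined 10 hours, the MTTR is 75 minutes.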

    Measuring Pipeline Efficiency

    Pipeline efficiency can be assessed by analyzing a few crucial metrics:

    • Cycle Time: The time it takes for new code to go live.
    • Lead Time: The time from ideation to delivery.
    • Throughput: The number of new features delivered over a specified time period.
    • Work-in-Progress (WIP): The number of tasks currently being worked on in the pipeline.

    By tracking these metrics, you can identify bottlenecks and continuously improve your CI/CD process.

    Continuous Improvement of CI/CD Pipelines

    To maintain and improve your CI/CD pipeline over time:

    • Add more automated tests to catch issues early in the process.
    • Continuously monitor pipeline performance using tools like Datadog or New Relic.
    • Regularly review metrics and make data-driven decisions to optimize pipeline steps.
    • Foster a culture of collaboration by encouraging team members to suggest improvements.
    • Stay updated on the latest CI/CD tools and techniques to keep your pipeline cutting-edge.

    What’s Next for CI/CD Pipeline Orchestration?

    As the tech landscape evolves, new advancements in CI/CD pipeline orchestration are emerging.

    Upcoming Technologies Impacting CI/CD:

    • AI and Machine Learning: Makes pipelines smarter, automating error detection and optimization.
    • Serverless Computing: Reduces the need to manage infrastructure, enabling faster deployment.
    • Kubernetes: Enhances the management of large, complex microservice architectures.
    • DevSecOps: Integrates security practices directly into the pipeline process.

    Future Changes in CI/CD Practices

    With the introduction of new technologies, CI/CD practices will continue to evolve:

    • More Complex Pipelines: Pipelines will incorporate multiple branching paths and conditional logic.
    • Enhanced Security: Increased focus on integrating security throughout the CI/CD process.
    • Stronger Collaboration: Dev, QA, and Ops teams will work more closely for faster, seamless deployments.
    • Increased Automation: Machine learning and AI will further automate testing and deployment tasks.

    How does Atlantic BT fit in this picture?

    We empower clients to leverage their existing tools like Bitbucket, Packer, and Terraform, augmented by cutting-edge AI, to create advanced CI/CD pipelines. Our approach optimizes deployment across multi-cloud environments by combining automation with intelligent insights for continuous improvement.

    Key Features:

    1. Microservices & Containers: We break down monolithic applications into modular microservices using containers (Docker) to ensure scalability and fault tolerance. Packer automates the creation of machine images while Bitbucket Pipelines handles efficient code integration and testing.
    2. Infrastructure as Code (IaC): Using Terraform, we automate multi-cloud infrastructure provisioning, ensuring a consistent and repeatable deployment process across AWS, Azure, or Google Cloud.
    3. AI-Powered Automation: We integrate AI to predict errors and optimize performance by analyzing historical pipeline data. Machine learning models help improve test prioritization and auto-tune deployment times, making pipelines faster and more reliable.
    4. Smart Monitoring & Security: With AI-driven monitoring, we detect potential issues in real-time, providing actionable insights for resource optimization. Additionally, DevSecOps practices are embedded directly into the pipeline, ensuring security at every stage of the development lifecycle.

    By leveraging your current tools and integrating AI, Atlantic BT helps clients achieve faster, smarter, and more resilient CI/CD pipelines, driving efficiency and agility in their software development workflows.

  • AWS Certified Cloud Practitioner Journey with AWS She Builds

    The Certified Cloud Practitioner journey is about as simple as you make it. The AWS She Builds program is strictly for women and individuals who identify as women; however, this post covers the free resources in an unstructured format, so anyone interested in getting certified can learn more. Most importantly, it's recommended to study for 1-3 months and take the exam while the material is fresh in your mind.

    The initial sign-up process for AWS She Builds was simple. Individual cohorts span eight weeks, but to clarify, you're not limited to those eight weeks: the program is self-paced, so taking this journey while working a full-time job is possible. However, they request that you finish within a specific time frame to receive a free exam voucher ($100 USD value). You are required to attend the initial onboarding meeting; after that, they will send an email with a Slack invite, and your journey will begin! Questions posted in the Slack channels receive near-immediate responses, and members of previous cohorts and other, more experienced women can also provide guidance.

    The next cohort starts on August 25th (2022):

    AWS She Builds – SkillUp with CloudUp – Cloud Practitioner

    What is AWS She Builds?

    • A flexible 8-week program for individuals who identify as women.
    • Community-driven learning environment.
    • Utilizes AWS Skill Builder for structure.
    • Provides a baseline structure to keep you on track:
      • A weekly guided, academic-style flow with modules and dates.
    • Live training sessions and webinars.
    • Study groups within Slack, available online across the globe.
    • Resources from external sources for extra practice.
    • Weekly live Q&A sessions for anyone who feels ‘stuck’ or just wants to listen.

    Why is AWS She Builds just for women?

    • AWS She Builds aims to bridge the gender gap, pay discrepancies, and lack of diversity on IT teams.
    • It allows women to bond and come together to discuss tech and leadership, and to network for career expansion.

    What about others?

    The resources used for AWS She Builds are available to everyone for free. The caveat is that you cannot participate in the live webinars or Q&A sessions, or get the detailed structured paths that the mentorship provides.

    Resources to get certified for Cloud Practitioner:

    AWS Training and Certification Skill Builder

    Digital Cloud Cloud Practitioner Cheat Sheet

    AWS Educate

    AWS Power Hour

    Cloud Practitioner YouTube Playlist

  • Maintaining a Website is like Owning a Car: An Analogy to Help Website Maintenance Make Sense

    In the previous post, I discussed how the planning and development of a custom website can be similar to having a custom house designed and built. The analogy works in many ways, particularly in the planning and designing phases, but it’s not a perfect analogy.

    In this post, I’ll correlate website maintenance and ongoing development with owning a car. Keeping an application “purring like a kitten” takes effort, and I also won’t leave out aspects of digital upkeep that just don’t make sense with car terminology.

    When you’re done, the last post in the series compares the experience with your vendor to a fine dining experience.

    Keeping Fuel in the Tank

    Visiting the gas station is probably the most ubiquitous aspect of owning a car, unless the sole purpose of the vehicle is keeping it on a pedestal for viewing only. With a website, the same case can be made for hosting. Server or cloud hosting is foundational for a site to be usable. And similar to a car that may be driven more or less in a month, hosting costs can fluctuate – particularly from high or unexpected amounts of traffic.  (Although we want web traffic, we don’t want car traffic.)

    With a website that your users (or future customers) depend on, someone needs to be monitoring that hosting. A common offering these days is the Managed Service Provider (MSP). Picture this as a dedicated fueling crew, keeping the gas and fluids topped off. With the monitoring and ownership of an MSP, it could be said they also maintain and protect your garage and driveway, which can be compared to how your site reaches the rest of the internet.

    Maintenance Every 5,000 Users or 3 Months, Whichever Comes First

    No, not really every 5,000 users. But maintenance should be performed at least every 3 months. You may fuel your car regularly, but you also need oil changes and other regular maintenance items. Hosting on its own isn’t enough. 

    Code libraries and software versions need updates. If your site uses a Content Management System, those platforms also require regular updates. Code needs love just like your car. Your team of mechanics (development team) should perform these regularly.

    Sometimes these updates can cause small unexpected changes to the site, and fixing those is usually part of ongoing maintenance. Think of it as professional detailing whenever you bring your site in for maintenance. Keep it shiny! If you don't perform regular updates, issues can build and compound. This is what we refer to as technical debt.

    But Wait, I Want Something New On My Site

    We haven’t even gotten to new development yet – it’s all been upkeep. Yup, that’s on purpose. Would you spend money to upgrade a car you don’t even bring in for maintenance? I doubt it. New features don’t correlate to owning a car quite as well – but maybe at some point you want a custom paint job or to upgrade the console. 

    In my experience, it’s pretty uncommon that someone builds and launches a brand new site and then never wants anything to change. The process of a new feature for your car or your website is roughly the same: you talk to someone about what you want, provide some detail, get an estimate or quote, you approve the work, the work is completed, and you’re good to go. Where does website development beat out vehicle improvement? You don’t have to drop your website off and rely on another one while the work is being done.

    Lifespan & Resale Value

    Just like a car, your web application will have a longer lifespan if it's better maintained. You might drive daily and reliably for 8 years, but if you never take your car to the mechanic, its lifespan is going to be shorter. If you let technical debt build and build, there can come a point where it would be cheaper to rebuild a new site than to upgrade and fix the current one. There may not be "resale value" in a web application, but in five to ten years, when you're ready for a replacement, the cost of building it will be lower if the current site has been cared for.

    I’ve known people who drove cars to the absolute maximum end of their lifespan, spending as little money as possible on maintenance and upkeep. And yes, that’s an option with a website. However, for a website that presents your brand, brings in customers, and is possibly part of your employees’ workflows, it’s not going to end well.

    Where The Car Analogy Screeches to a Halt

    As you can see, there are a lot of similarities between owning a website and a car. But not everything fits.

    The most important difference? A website isn’t a product. There’s a reason we’re not discussing how buying a car is like building a site – it’s not. Almost all cars are manufactured, with certain options to choose from, in very specific configurations. A website simply isn’t a thing you buy; it’s a service that you (with a development crew) build.

    Another difference is with a website, new technologies (or just time) can uncover vulnerabilities in code. This is why updates to libraries are important, as they’re where those vulnerabilities get fixed. Every time you bring your car in for an oil change, you’re not also changing the locks and reinforcing the windows. There typically aren’t new advances in car theft that you need maintenance every few months to prevent. However, even this is changing given the amount of digital components in vehicles, such as recent stories of key fobs being maliciously replicated.

    It’s reasonable to assume, or at least hope, that a car won’t have any problems or recalls for the first couple of years. Bugs will happen on a website. I promise. With a car, there’s typically a single operator (driver). On a site? Thousands of visitors with unique combinations of browsers, operating systems, hardware, and digital savviness are using your application. It’s impossible to guarantee every aspect of a site will work all of the time for everyone.

    There’s also not a good vehicle comparison to deployments. Deployments are when a site is first launched, but also any time code or hosting changes are made. Possibly driving the car off the lot is equivalent to the initial deployment of a new website, but other than that? I suppose when you leave the mechanic with your car in top shape – but it’s not a metaphor that feels right.

    There are plenty of other aspects that don’t overlap between cars and websites. Website hacking, car insurance, and your friend borrowing your car – it doesn’t all match up.

    Finishing The Trilogy

    I hope this article and the others in the series provide better context for your next website or application project. To wrap up the series, I’m going to dive deeper into the experience of working with a partner when building a website.

    How Building a Custom Website Is (and Isn’t) like… A Fine Dining Experience (3 of 3 in Series).

  • Deploying a .NET Core API to AWS Lambda & API Gateway using Bitbucket

    After a transition to Bitbucket for our repository hosting last year, we’ve been setting up more and more CI/CD using Bitbucket Pipelines. I recently converted a .NET Core API project that was deploying to an auto-scaling group of EC2s into a project that could be deployed in AWS Lambda behind API Gateway using Bitbucket. The web project that consumes this API is written in Angular.

    As a shop that leverages automated deployments in multiple environments, I found documentation on the web to be in short supply, aside from very basic “deploy using Visual Studio” articles.

    As part of updating the API project to Lambda, I did make my LambdaEntryPoint reference APIGatewayHttpApiV2ProxyFunction and set the serverless template event to type HttpApi. A guide to these updates can be found here: One Month Update to .NET Core 3.1 Lambda

    In this post, I share some snippets of YAML code I found out in the world that didn’t work, as well as what ultimately did.

    Jump to What Worked

    What Didn’t Work

    aws-sam-deploy

    caches:
      - dotnetcore
    steps:
      - export PROJECT_NAME=this-dotnet-project
      - dotnet restore
      - dotnet build $PROJECT_NAME
      - pipe: atlassian/aws-sam-deploy:1.5.0

    When trying to use the aws-sam-deploy pipe, I wasn’t able to leverage enough options or get the API to run the .NET code successfully. The API Gateway endpoint was running and hitting Lambda, but I was getting system errors I just couldn’t resolve.

    Using project appsettings files

    Since appsettings.json files contain secrets, we don’t check them into the repo. At some point I was receiving the errors below, and I realized that the appsettings files weren’t getting deployed correctly.

    run_dotnet(dotnet_path, &args) failed
    
    Could not find the required 'this-dotnet-project.deps.json'. This file should be present at the root of the deployment package.: LambdaException

    We ended up injecting the appsettings content using the AWS Parameter Store with aws ssm get-parameter.

    dotnet publish, zip, and aws-lambda-deploy

    - apt-get update && apt-get install --yes zip
    - dotnet restore
    - dotnet publish ${API_PROJECT_NAME} --output "./publish"  --framework "netcoreapp3.1" /p:GenerateRuntimeConfigurationFiles=true --runtime linux-x64 --self-contained false
    - curl -o /bin/jp -L https://github.com/jmespath/jp/releases/download/0.1.3/jp-linux-amd64 && chmod a+x /bin/jp
    - aws ssm get-parameter --name "/this-project/api/dev/appsettings" --with-decryption --region us-east-1 | /bin/jp -u "Parameter.Value" | base64 -d > ./publish/appsettings.json
    - zip -r -j package.zip publish/*         
    - pipe: atlassian/aws-lambda-deploy:1.5.0
      variables:
         AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
         AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
         AWS_DEFAULT_REGION: ${AWS_REGION}
         FUNCTION_NAME: "this-dotnet-project-AspNetCoreFunction-LZj5pbvV0GRT"
         COMMAND: "update"
         ZIP_FILE: "$BITBUCKET_CLONE_DIR/package.zip"

    We have some single Lambda functions (not behind API Gateway) that use this method for deploying. It works great. I tried using this method to push to a function that was built with a stack published via Visual Studio. No luck. It’s possible there was a problem with the stack that was built, but I think the package wasn’t exactly right.

    What Works

    The following is the pipeline for a single branch, our develop branch. I haven’t yet refactored using template steps, but this is easier to read through for this article anyway.

    I have scrubbed the contents of this YAML, but the repo contains:

    • Root (.sln file)
      • .Net Core API project directory (named this-dotnet-project here)
      • Angular web project directory (named this-web-project here)
    pipelines:  
      branches:
        develop:
          - step:
              name: API (.Net Core) Build & Deploy 
              image: mcr.microsoft.com/dotnet/core/sdk:3.1
              deployment: Develop
              script:
              - apt-get update && apt-get install -y zip && apt-get install -y awscli
              - dotnet tool install -g Amazon.Lambda.Tools
              - export PATH="$PATH:/root/.dotnet/tools"
              - curl -o /bin/jp -L https://github.com/jmespath/jp/releases/download/0.1.3/jp-linux-amd64 && chmod a+x /bin/jp
              - aws ssm get-parameter --name "/this-project/api/dev/appsettings" --with-decryption --region us-east-1 | /bin/jp -u "Parameter.Value" | base64 -d > ./this-dotnet-project/appsettings.json
              - cd this-dotnet-project/
              - dotnet lambda deploy-serverless --aws-access-key-id ${AWS_ACCESS_KEY_ID} --aws-secret-key ${AWS_SECRET_ACCESS_KEY} --region ${AWS_REGION} --configuration "Development" --framework "netcoreapp3.1" --runtime linux-x64 --s3-bucket $API_S3_BUCKET --stack-name $API_STACK_NAME --stack-wait true
          - step:
              name: Web (Angular) Build
              image: atlassian/default-image:2
              caches:
              - node
              script:
              - cd this-web-project # The Angular project is currently in a subfolder in the same repo
              - nvm install 12
              - nvm use 12
              - npm install @angular/cli
              - npm run build:dev
              artifacts: # defining the artifacts to be passed to each future step.
              - this-web-project/dist/**
          - step:
              name: Web (Angular) Deploy
              script:
              - pipe: atlassian/aws-s3-deploy:0.5.0
                variables:
                  AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} 
                  AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
                  AWS_DEFAULT_REGION: ${AWS_REGION}
                  S3_BUCKET: ${WEB_S3_BUCKET_DEV}
                  LOCAL_PATH: "this-web-project/dist/this-project-output-path/"

    apt-get install and dotnet tool install

    These were quickly apparent as missing before I added them; leaving them out was more of a “duh” moment amid trying so many things.

    dotnet lambda deploy-serverless

    This was the big command that mostly got things working. I finally figured out that this is effectively what happens when you deploy the API project from Visual Studio.

    --stack-wait true

    In Bitbucket, without this flag, the build shows as successful as soon as the stack build is kicked off. By adding it, Bitbucket waits for the full stack build or update to complete before continuing.

  • Atlantic BT’s Jon Karnofsky (JonK) to present on cross-functional teams, planning, and empathy.

    We are excited to announce that Atlantic BT’s own Director of Operations, Jon Karnofsky, is scheduled to speak at Atlassian’s Team Tour: the Series event on May 11th at 11:00am PDT.

    Atlassian is a global software company dedicated to creating amazing products, practices, and open work for all teams. You’ve likely heard of their software development and collaboration tools like Jira, Confluence, Bitbucket, and Trello.

    This free virtual conference will cover teamwork trends, expert insights, and actionable ways to implement change. Get hacks for maximizing Atlassian products and see how other companies are using their tools to drive long-term success.

    In JonK’s session, he will discuss how we reorganized into cross-functional teams, the benefits and challenges of moving to teams, and how planning and empathy can be used in organizational change.

    Be sure to check it out on May 11th – Reorganizing into cross-functional teams takes smarts and heart: here’s how we did it.

  • A Look Inside Atlantic BT’s DevOps Process

    To deliver robust solutions to clients, code must be reliable, scalable, maintainable, and secure. This level of quality can only be achieved by building a solid software development process throughout the Software Development Life Cycle (SDLC).

    The Benefits of DevOps Methodology

    DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity.

    Atlantic BT adopted DevOps methodology because we saw the following benefits, both tangible and intangible, to our ability to deliver quality solutions to our clients:

    Tangible Benefits

    • Shorter development cycle
    • Increased release velocity
    • Improved defect detection
    • Reduced deployment failures and rollbacks
    • Reduced time to recover upon failure

    Intangible Benefits

    • Increased communication and collaboration
    • Improved ability to research and innovate
    • Promotion of a performance-oriented culture

    How will working with a DevOps partner benefit me?

    You can benefit from partnering with a company that follows DevOps practices in the following ways:

    • Faster delivery of features
    • More stable operating environments
    • More time available to add value (rather than fix/maintain existing features)

    DevOps Process Chain

    Because DevOps is a cultural shift built on collaboration between development, operations, and testing, it focuses on process and approach.

    Atlantic BT takes the following steps in our DevOps process for software development and delivery:

    • Code – Conduct code development and review, version control tools, and code merging
    • Build – Implement continuous integration tools and build status
    • Test – Run automated tests and use the results to measure performance
    • Package – Create artifact repository and application pre-deployment staging
    • Release (Deploy) – Set up change management, release approvals and release automation
    • Configure – Implement infrastructure configuration and management, as well as Infrastructure as Code tools
    • Telemetry – Implement application performance monitoring and end user experience measurements

    Elements of Atlantic BT’s DevOps Process

    Automation with Jenkins

    Because automation is an important part of DevOps, your tool set is essential. Atlantic BT’s primary Continuous Integration (CI) tool is Jenkins automation server. Jenkins is an extensible, cross-platform, continuous integration and delivery automation server for open source projects.

    Jenkins supports version control systems like Git, making it easier for developers to integrate changes to the project and for users to obtain a fresh build. It also allows us to define build pipelines and integrate with other testing and deployment technologies.

    Automated Testing

    We have a dedicated QA department and include QA time as part of the development plan as a best practice. As a minimum baseline, we evaluate the platform using unit and functional testing.

    Our Continuous Integration tools perform the following key test elements:

    • Unit Test validation
    • Integration Test validation
    • Code analysis
    • Functional Tests

    Once sections of an application have been QA’d through unit and functional tests, automated tests can be developed for ongoing quality assurance.
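
    As a minimal sketch of how these elements appear in a CI configuration (shown here as a Bitbucket Pipelines step; the same stages map onto Jenkins build pipelines), the tools and scripts below are illustrative choices, not a prescribed setup.

    pipelines:
      default:
        - step:
            name: Unit tests and code analysis
            image: node:18
            script:
              - npm ci
              - npm test          # unit and integration test validation
              - npx eslint src/   # static code analysis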

    Infrastructure-as-Code Approach

    ABT optimizes cloud architecture for maximum reliability and scalability while maintaining security. We take an infrastructure-as-code approach, scripting all instance builds so they can be automated—and thus reliably replicated—in the production process.

    The ability to reliably configure and stand up server instances is critical, as most complex projects require many servers of different configurations at different stages of the project to accommodate development, testing, migration, and production needs. This approach also facilitates Disaster Recovery planning and implementation.
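
    As a small illustration of the approach, the hypothetical AWS CloudFormation snippet below describes a server instance as code, so the identical build can be recreated for development, testing, or disaster recovery; the AMI ID and instance type are placeholders.

    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ami-0123456789abcdef0   # placeholder machine image
          InstanceType: t3.micro
          Tags:
            - Key: Name
              Value: web-server-dev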

    Monitoring, Metrics, and Alerting

    Understanding the importance of metrics, we maintain a fully staffed NOC that monitors key performance parameters and responds to alerts 24/7/365. We take responsibility for monitoring application and infrastructure health, including:

    • Application availability and response time
    • CPU, memory, and disk utilization
    • Throughput
    • HTTP response codes
    • Database connections

    Metrics for applications hosted on AWS are collected in Amazon CloudWatch; for applications hosted elsewhere, metrics are determined as appropriate by hosting method.

    DevOps and AWS

    Atlantic BT’s AWS partnership enables us to fully tap into their set of flexible services, which are designed to empower companies to deliver products using DevOps practices. These services simplify provisioning and managing infrastructure, deploying application code, automating software release processes, and monitoring application and infrastructure performance.

    AWS Command Line Interface

    Beyond the AWS console, advanced developers can manage their websites with command line tools like the AWS Command Line Interface (CLI), a unified tool for managing AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

    The AWS CLI includes commands for over 140 AWS services, along with simple file commands for efficient file transfers to and from Amazon S3.
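
    A few illustrative calls (the bucket name is a placeholder) show the pattern:

    aws s3 sync ./dist s3://my-example-bucket --delete      # push a build to S3
    aws ec2 describe-instances --region us-east-1           # list EC2 instances
    aws cloudwatch list-metrics --namespace AWS/EC2         # inspect available metrics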

    CI/CD Pipeline on AWS

    A CI/CD pipeline on AWS lets you automate your software delivery process, such as initiating automatic builds and deploying to Amazon EC2 instances. AWS CodePipeline will build, test, and deploy your code every time there is a code change. Use this tool to orchestrate each step in your release process.

    Other Amazon Tools

    Other Amazon tools we use include:

    • Amazon API Gateway: a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
    • AWS CloudTrail: a web service that records AWS API calls for your account and delivers log files to you.
    • AWS CodePipeline: a service that builds, tests, and deploys your code every time there is a code change, based on the release process models you define.
    • AWS Identity and Access Management (IAM): manages access; you can specify which users can perform which actions on a pipeline.
    • Amazon CloudFront Reports and Analytics: offers a variety of solutions including detailed cache statistics reports, monitoring your CloudFront usage, getting a list of popular objects, and setting near real-time alarms on operational metrics.

    Start Implementing DevOps Today

    Ultimately, organizations that implement DevOps evolve their products faster than those using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.

    If you’re interested in getting help implementing DevOps or looking for a software development partner that follows best practices, contact us to learn more.