Atlantic Business Technologies, Inc.

Author: Jon Karnofsky

  • Magento Middleware: Mulesoft Anypoint Pros & Cons.

    Mulesoft’s Anypoint platform can be a reliable choice for middleware. With its maturity in the integration space, combined API management and iPaaS functionality, and ease of use, we consider it a contender against Red Hat Fuse and Amazon EventBridge.

    Anypoint is truly ready to go, with minimal coding needed to expose and connect existing APIs. Mulesoft offers multi-cloud support, security support, and multi-tenancy (so costs can be attributed appropriately to different business units).

    The API offering allows companies to create API endpoints for all of their connected systems, with robust monitoring and configuration. This provides secure, managed access to internal and external systems.

    Mulesoft is also a thought leader in the integration space and has underpinned its Anypoint platform with machine learning that can discover reusable artifacts, flows, connectors, and data from within the platform.

    Mulesoft, however, is more expensive than other solutions. The entry-level cost is around $100,000 per year, and they typically look for three- or five-year contracts. Their sales force is informed, technical, helpful, and stays heavily engaged during contract negotiations (Mulesoft is owned by Salesforce).

    Analysis of key features.

    Ease of Use

    Mulesoft is extremely easy to use, with a desktop-integrated development environment (IDE) in Anypoint Studio. This IDE allows developers to get right to work interacting with the Anypoint platform. Mulesoft also continues to add simple-to-use integration tools, like Flow Designer, to its toolset. These interface-based tools give developers and IT professionals a quick way to start integrating services in Anypoint.

    Community Support

    There is a large and highly engaged community around Mulesoft, boasting 42,000 members. There are regular conferences, meetups, and community-driven Q&A sites.

    Ability to Scale

    Mulesoft’s iPaaS runs on top of Amazon Web Services with flexibility baked into the pricing model, as they sell Anypoint based on CPU consumption. The platform has been built to be distributed, not just a monolith; Mulesoft says their platform will run on a Raspberry Pi or any cloud provider.

    Mulesoft’s active-active clustering, which the company claims is infinitely scalable, does however require a Platinum Support subscription. Their ETL is very mature: complex features have been distilled into a simple interface that requires little knowledge of the backend code needed to make the data transformations.

    Security

    Anypoint’s security is top-notch and meets or exceeds most security frameworks, including PCI DSS, SOC 2, and HIPAA. They use a shared security model: the end customer is responsible for using the tools provided to secure the data that passes through the system, while Mulesoft maintains the security of the system itself.

    Total cost of ownership.

    Mulesoft is easy enough to set up, and there is a large library of “Anypoint Connectors” written by Mulesoft and third-party partners. However, when testing, ABT found that many common ERP systems were not covered by the connector library.

    That said, their IDE makes the development and testing of connectors faster than on other platforms. We believe the complete setup and integration of Mulesoft with the standard eCommerce requirements to an ERP without an existing connector would take between 100 and 125 hours.

    License evaluation, cost flexibility, and vendor lock-in analysis.

    • License Evaluation – A two-week trial period is available with Mulesoft. While that period can be extended several times, their sales process is aggressive. As a fully commercial offering that is considered an enterprise product and a leader in the space, they do command a premium price after the evaluation period.
    • Cost Flexibility – Mulesoft does not provide as much cost flexibility relative to the other contenders. Mulesoft requires a minimum annual contract and will push for a multi-year deal.

    Mulesoft has a “usage factor,” but it isn’t strictly a pay-for-what-you-use model. Instead, it is a combination of a low average yearly base cost plus an added expense that ramps up as time goes on.

    • Vendor Lock-In Analysis – In general, an integration tool such as an API manager or iPaaS is not nearly as sticky as something like an ERP or CRM. By nature, this integration layer can be plugged in and unplugged.

    For the initial integration requirements, any middleware/iPaaS can be replaced without too much trouble, but the connectors and configurations need to be redone. We don’t see any major benefit to one contender vs. another for vendor lock-in at a technical level, but you can expect a typical annual or multi-year agreement with Mulesoft, so that reduces the score.

    Technical commentary.

    The trend in software development is toward a microservices-based architecture. Rather than building monolithic software, deployed as a single unit with interdependencies woven in, a microservices approach rests on the premise that small services with a narrow purpose provide the best scalability, resilience, and flexibility.

    Mulesoft’s approach is to provide an enterprise-level offering of these services, rather than a suite of individual a la carte services. Even though Mulesoft by its nature supports the microservices concept, their offering is considered by some a monolith, at least in terms of how it’s licensed and consumed.

    Need help choosing the best middleware platform?

    If you are interested in using middleware as an integration solution, or if you are looking for any help choosing the right technology, our team is here to help. Contact us for a free consultation. We have a proven research process for evaluating your options.


  • Shared Google Authorization with an Angular site and .Net Core API.

    There are many Angular tutorials for setting up websites using the Angular framework and .NET Core APIs. Likewise, there are many walkthroughs for integrating Google authentication with each. However, implementing these solutions separately means authenticating through Google twice: once for the Angular site and once for the API.

    This article provides a solution that allows shared Google authorization through authentication on the Angular site. To avoid the need to authenticate a second time, pass the token to the API in a standard header and use Google libraries to validate and authorize.

    Technology used in this Angular tutorial.

    This post assumes you’ve got a basic Angular website and Web API project running. The approach will also likely work for any Angular 2+ site (or other front-end site) where Google authentication occurs, and for any Web API project on .Net Core 2+.

    The site I’m working with is designed to be exclusively authenticated through Google; however, this method could be extended to handle multiple authentication formats (assuming there are .Net validation libraries for them or you write your own). One other aspect to mention: I am not storing any user data in a database.

    Using the Angular site, Google login, and local storage as a start.

    The primary goal is to make sure you have access to Google’s idToken after authentication. Using the angularx-social-login default setup is pretty simple to get working. This is a pretty good article which also walks through setting up the Google app as part of this if you need it. I can’t find the original post I followed, but this Stack Overflow post shows storing the Google user/token in state for future calls.

    This code block (customauth.service.ts in the Angular site) just shows that on user subscription the user is stored in local storage:

      import { Injectable, NgZone } from '@angular/core';
      import { Router } from '@angular/router';
      import { AuthService, GoogleLoginProvider, SocialUser } from 'angularx-social-login';

      @Injectable({ providedIn: 'root' })
      export class CustomAuthService {
        userData: SocialUser;

        constructor(
          public authService: AuthService,
          public router: Router,
          public ngZone: NgZone // NgZone service to remove outside scope warning
        ) {
          // Setting logged in user in localStorage, else null
          this.authService.authState.subscribe(user => {
            if (user) {
              this.userData = user;
              localStorage.setItem('user', JSON.stringify(this.userData));
            } else {
              localStorage.setItem('user', null);
            }
          });
        }

        // Sign in with Google
        GoogleAuth() {
          return this.authService.signIn(GoogleLoginProvider.PROVIDER_ID);
        }

        // Sign out
        SignOut() {
          return this.authService.signOut().then(() => {
            localStorage.removeItem('user');
            this.router.navigate(['/']);
          });
        }
      }

    Options researched before finding the current solution.

    • The Microsoft standard way to handle Google authentication. This is slick if you’re building an MVC site and need to allow Google auth, but I couldn’t find a way to allow sending over the token, as this generates and uses a cookie value with an Identity.External key.
    • JWT authorization is an option, but the tutorials got heavy quickly. Since I don’t need to store users or use Microsoft Identity, I blew past this.
    • A custom policy provider is another Microsoft standard practice. There might be a better way to accomplish the solution using this approach, but I didn’t walk this path too far since I wasn’t using authentication through the .Net solution.

    The solution: a .Net Core custom authorize attribute.

    I used this Stack Overflow post about custom auth attributes to hook up the solution. This is what allows the shared Google authorization using a standard authorization request header.

    Approach

    1. In Angular
      1. Build the Authorization header using the Google idToken.
      2. Pass the header for any authorize only API endpoints.
    2. In the web API
      1. Enable authorization
      2. Create a custom IAuthorizationFilter and TypeFilterAttribute
      3. Tag any controllers or endpoints with the custom attribute

    I provide code samples for these steps below.

    Angular API calls with an authorization header.

    The code in the API service (api.service.ts in the Angular site) grabs the idToken from the user in local storage and passes it through the API call. If the user is logged out, this header isn’t passed.

    import { Injectable } from '@angular/core';
    import { HttpClient, HttpHeaders } from '@angular/common/http';
    import { SocialUser } from 'angularx-social-login';
    import { environment } from './../../environments/environment';
    
    // Note: Account is an app-specific model (import not shown).
    @Injectable({ providedIn: 'root' })
    export class ApiService {
      apiURL = environment.apiUrl;
      user: SocialUser;
      defaultHeaders: HttpHeaders;
    
      constructor(private httpClient: HttpClient) {
        this.user = JSON.parse(localStorage.getItem('user'));
        this.defaultHeaders = new HttpHeaders();
        this.defaultHeaders = this.defaultHeaders.append('Content-Type', 'application/json');
        if (this.user != null) {
          this.defaultHeaders = this.defaultHeaders.append('Authorization', 'Bearer ' + this.user.idToken);
        }
      }
    
      public getAccounts() {
        const accounts = this.httpClient.get<Account[]>(`${this.apiURL}/accounts`, { headers: this.defaultHeaders });
        return accounts;
      }
    }

    Enabling authorization in the .Net Core project.

    In the Startup file (Startup.cs in the API project), authorization has to be enabled.

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
      ...
      app.UseRouting();
      app.UseAuthorization();
      app.UseEndpoints(endpoints =>
      {
          endpoints.MapControllers();
      });
    }

    The custom filter attribute to validate without another authorization.

    This creates the attribute used for authorization and performs a Google validation on the token.

    This application is used only for our Google G Suite users, and thus the “HostedDomain” option of the ValidationSettings is set. This isn’t necessary and can simply be removed if you allow any Google user to authenticate.

    I’ve named this file GoogleAuthorizationFilter.cs in the API project.

    using Google.Apis.Auth;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.Filters;
    using System;
    
    namespace YourNamespace.API.Attributes
    {
        /// <summary>
        /// Custom Google Authentication authorize attribute which validates the bearer token.
        /// </summary>
        public class GoogleAuthorizeAttribute : TypeFilterAttribute
        {
            public GoogleAuthorizeAttribute() : base(typeof(GoogleAuthorizeFilter)) { }
        }
    
    
        public class GoogleAuthorizeFilter : IAuthorizationFilter
        {
    
            public GoogleAuthorizeFilter()
            {
            }
    
            public void OnAuthorization(AuthorizationFilterContext context)
            {
                try
                {
                    // Verify Authorization header exists
                    var headers = context.HttpContext.Request.Headers;
                    if (!headers.ContainsKey("Authorization"))
                    {
                        context.Result = new ForbidResult();
                        return;
                    }
                    var authHeader = headers["Authorization"].ToString();

                    // Verify authorization header starts with "Bearer " and has a token
                    if (!authHeader.StartsWith("Bearer ") || authHeader.Length <= 7)
                    {
                        context.Result = new ForbidResult();
                        return;
                    }

                    // Grab the token and verify through Google. If verification fails, an exception will be thrown.
                    var token = authHeader.Remove(0, 7);
                    var validated = GoogleJsonWebSignature.ValidateAsync(token, new GoogleJsonWebSignature.ValidationSettings()
                    {
                        HostedDomain = "yourdomain.com",
                    }).Result;
                }
                catch (Exception)
                {
                    context.Result = new ForbidResult();
                }
            }
        }
    }
    

    Putting the custom attribute in place.

    This is just a snippet of code; on your controllers, you just have to add one line (well, two, including the using statement). If the GoogleAuthorize attribute doesn’t validate, the call returns access denied.

    using YourNamespace.API.Attributes;
    
    [GoogleAuthorize]
    [ApiController]
    public class AccountsController : BaseController
    {
        // ... controller actions ...
    }

    Voila! No need for a second authentication.

    The .Net API is now locked down only to requests originating from a site with Google authentication. The custom attribute can be extended for additional authentication sources or any other desired restrictions using the request. I like the simplicity of a site which allows Google auth only, but it wouldn’t be a stretch to add others – and I really like not managing any users or passwords. I hope this Angular tutorial for shared Google authentication works well for you too!

  • Moving to the Cloud: What, Why, and How to Get Started

    The cloud. Wherever you go, you hear about “the cloud” and phrases like “cloud management” or “put it up on the cloud.” But what is the cloud? And what does moving to the cloud actually mean for your business? Perhaps you work for (or maybe even run) a company which has an old website you’re looking to update. Or maybe your business depends on an application built five to ten years ago and it’s holding back growth. If so, the cloud could be a big help.

    In this post, we’ll give you a brief summary of what the cloud is, and explain why moving your application to the cloud may save you money—or your entire business.

    The Cloud: What Is It, Exactly?

    Here’s a secret—“the cloud” is really just “the Internet.” It’s a bunch of connected computers communicating with each other on which web sites, web applications, and web storage can run. Think of it as all the computers you could connect to over the Internet that aren’t other people’s home machines.

    Why did someone invent this new term? Well, because of the sheer number of computers that exist now, a huge percentage of which are dedicated to Internet-based use. The name Internet came from a visual image of these countless connected computers hooked together like a fishing or basketball net. If you needed to, you could locate and count each place in such a net where two pieces of cord tied together. Now picture a small puffy cloud in the sky, and try to consider how many drops of water are in there. That’s probably how many computers are now connected on the Internet.

    Now that you have a sense of what the cloud is, let’s discuss three benefits of moving your site or application to the cloud: scalability, disaster recovery, and SaaS applications.

    Scalability: Getting More (For Less)

    What This Is: When you hear someone say “scalability,” it means this: a web application which used to only run on one machine can now run on two, three, 27, etc., all at the same time.  Most websites of olden days (and still many today) ran on a single computer.  A “scalable” web site or application is designed to run smoothly no matter how many physical computers it’s copied to.

    Why This Matters: Does your website experience random peak times when the number of visitors rapidly escalates? For example, a news website may have an average daily amount of traffic. But if a major story breaks, the traffic number could be ten times that of the average. If your infrastructure doesn’t have the capacity to handle this kind of surge, these peak periods could crash your website.

    That said, you don’t want to buy lots of physical IT equipment in preparation for peak usage. Imagine your website was a restaurant – you might need 20 people on staff Friday nights, but only ten on Wednesday lunch.  If you kept a maximum staff all the time, you’d be paying a lot of employees to stand around “just in case.”  In the same way, why pay for a huge web server all of the time if you know three days of the month are the busiest?

    How the Cloud Helps: Cloud scalability solves this by allowing an application to run on a smaller machine for normal use, and more machines can be turned on automatically. For example, an Amazon Web Services cloud lets you schedule when machines turn on or off. AWS can also change capacity automatically based on how much usage a website is getting. Either way, this translates directly into lower costs and prevents your website from being overloaded.

    Disaster Recovery: Better Safe than Sorry

    What This Is: Sometimes, computers just die. “Disaster Recovery” means you avoid losing data and/or business because part of your IT infrastructure fails.  

    Why This Matters: What happens when the one machine that handles orders from your customers up and dies? If this disaster strikes and you’re not prepared, that web application (and thus a major part of your business) is toast. Also, if you rely on a single web server in a data center, and it fails, your website will be offline while your IT department rebuilds that server.

    How the Cloud Helps: In the cloud, the physical machine your application runs on doesn’t matter.  Your web site, the operating system it runs on, and all the magic your IT gurus have set up are all configured and saved. If the machine hosting your site fails, the cloud realizes that machine is no longer available, and it automatically sets up a new one. Your website could be back up and running in 15 to 30 minutes without you having to do anything.

    Want an even better option?  Use the cloud to run two copies of your site on two separate machines. If one machine fails, the other is still in control and handles the load until the first is replaced—with no downtime.

    SaaS Offerings: What You Need May Already Be on the Cloud

    What This Is: “SaaS” stands for “Software as a Service.” This is a program you can sign up for and log into through the web. It’s not custom-made for you but instead may serve thousands of other users. Common examples are Google Docs and Office 365, SaaS offerings designed to replace traditional desktop software like Microsoft Word.

    Why This Matters:  Say you’re ready to adopt a new web-based application to replace your old system and increase productivity.  However, you currently have a lot of processes in place “just because you have to” due to the old system. Is it worth the time and effort to customize your new cloud solution?

    How the Cloud Helps: SaaS offerings can offset some custom development while giving you a highly visible and agile workflow. This makes it easier to fix those bad business practices. You can now use software designed for growth and flexibility. This not only sets up your company for faster growth, it also controls costs.

    Plus, a good SaaS offering will have a well-documented and robust Application Program Interface (API).  An API allows your custom site to work with a SaaS.  For example: Dropbox for file storage and Mavenlink for project management are excellent SaaS offerings with great APIs.

    Make the Cloud Yours

    There are plenty more cloud-based features out there. Amazon’s AWS alone offers cloud-based web servers, databases, file storage, email services, text notifications, and countless other services. All of these features set your business up for higher productivity and dynamic growth—without high overhead costs.

    Your cloud migration won’t look exactly like everyone else’s.  Take a few minutes to think about the programs you use that keep your business running and profitable.  Are they scalable? Could you recover from disaster? Is there a good, inexpensive, online way to do part of what you’re doing?

    If you said yes to any of these, welcome to the cloud.

    To learn more about a cloud upgrade, contact the team at Atlantic BT. As an AWS Certified Partner, we have the knowledge and experience to help your website reach great heights.

    *photo courtesy of Nicolas Raymond on Flickr

  • Definitive Guide to CSOS Development

    Are you currently developing or planning to develop a Controlled Substance Ordering System, or CSOS? Then you’ve come to the right place. This guide was developed to provide all the information to create such an application in one friendly document. To learn more about how this process took shape, read about Atlantic BT’s web development work for NC Mutual Drug. This work was the inspiration for our writing this guide.

    Background for This CSOS Guide

    The DEA created the CSOS audit process when client-server architecture was the only viable solution. Audit cases exist for both the “sender” machine and the “receiver” machine (i.e. Audit Cases 1 & 2). Atlantic BT’s implementation is a web application, which performs both the client and server actions. Therefore, the documentation in this guide organizes these cases together. In a true client-server environment, the cases are applied appropriately. Atlantic BT’s implementation involves Windows servers and a .Net solution using C#. Information and examples providing guidance on accomplishing validation will use technology-specific references only when necessary. It should be stated that when signing an order with controlled substances, a single digital file must be utilized for signing. This is similar to the real-life scenario in which you would sign a physical document. In Atlantic BT’s application, an EDI 850 file is constructed for signing, and will be referenced throughout as the “order file.”

    Source Documentation CSOS

    Many documents exist and are publicly available for CSOS developers; several are referenced throughout this guide.

    Note: The CSOS Certificate Management documentation is intended for owners of the digital certificates. Its “Manage CSOS Certificates” section was created prior to Windows 8 and before Google Chrome was a standard web browser.

    Why Does this Guide Use “Audit Cases”?

    AtlanticBT used the Drummond Group to perform the CSOS audit. The Drummond Group uses 13 “Audit Cases” to test all requirements for a CSOS application. This guide provides information on the validation needed to pass each case, and how the audit occurs. Please note that the audit cases in this guide were in place at the time of AtlanticBT’s audit; the Drummond Group may add or modify these in the future.

    Audit Cases 1 & 2

    These cases involve verifying that the code and systems use approved FIPS modules.

    Validation

    1. Go to the NIST page listing all validated modules: csrc.nist.gov
    2. Find the module your implementation will be using
    3. Be able to show in your code and on your server you are using the module you specify.

    Audit Cases 1.1, 2.1 These audit cases validate the cryptographic module residing on the machines used in validation.
    Audit Process (Windows) In order to show what version the server is running, open a Command Prompt on the server and use the “ver” command. AtlanticBT’s solution was deployed on a Windows Server 2012 instance. This is Cert #1894 on the nist.gov page. Note that this cert correlates to “Software Version 6.2.9200”.

    C:\> ver
    Microsoft Windows [Version 6.2.9200]

    Audit Cases 1.2, 1.3, 2.2, 2.3
    Audit Process AtlanticBT showed screenshots of code demonstrating that encryption algorithms were used from Microsoft libraries approved within certificate #1894. To prove the code was truly FIPS compliant, the following was performed on a developer machine:

    1. With Windows not running in FIPS-enabled mode, run validation on a certificate using FIPS-approved modules. This should succeed.
    2. Change Windows to FIPS-enabled mode (see Stack Overflow: How to enable FIPS on windows 7).
    3. With Windows running in FIPS-enabled mode, run validation on a certificate using FIPS-approved modules. This should succeed.
    4. Change encryption to a non-FIPS-approved method (a temporary code change for auditing purposes only). Run validation on a certificate using non-FIPS-approved modules. This should fail, as illustrated in the sketch below.
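
    To make step 4 concrete, here is a minimal sketch of the difference between a FIPS-validated and a non-validated implementation, assuming the solution targets .NET Framework on Windows (the class names are standard BCL types; the sample data is a placeholder):

        // FIPS-approved vs. non-validated hashing (illustrative sketch).
        using System;
        using System.Security.Cryptography;
        using System.Text;

        class FipsCheck
        {
            static void Main()
            {
                byte[] data = Encoding.UTF8.GetBytes("sample order data");

                // FIPS-validated implementation: works whether or not FIPS mode is enabled.
                using (var sha = new SHA256CryptoServiceProvider())
                {
                    Console.WriteLine(BitConverter.ToString(sha.ComputeHash(data)));
                }

                // Non-validated managed implementation: throws InvalidOperationException
                // when Windows is running in FIPS-enabled mode (audit step 4).
                try
                {
                    using (var sha = new SHA256Managed())
                    {
                        Console.WriteLine(BitConverter.ToString(sha.ComputeHash(data)));
                    }
                }
                catch (InvalidOperationException ex)
                {
                    Console.WriteLine("Non-FIPS module rejected: " + ex.Message);
                }
            }
        }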

    Audit Case 1.4 This audit case validates that certificates (called “private keys” throughout CSOS documentation) reside in an encrypted fashion whenever stored.
    Validation Encrypt any stored instances of certificates. Atlantic BT’s solution utilized the AWS S3 service, which can encrypt files using the AES-256 encryption algorithm. Some solutions may involve using a computer’s Certificate Store, which uses encrypted storage.
    Audit Process Show proof that files are stored using encryption. If a custom storage solution is implemented, this would involve storing a new certificate and showing the stored file is encrypted.
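
    As one illustration of the approach above, the following sketch uses the AWS SDK for .NET to store a certificate file in S3 with AES-256 server-side encryption; the bucket and object key are placeholders, and other encrypted stores (such as the Windows Certificate Store) satisfy the requirement as well:

        // Store a certificate in S3 encrypted at rest (illustrative sketch).
        using System.IO;
        using System.Threading.Tasks;
        using Amazon.S3;
        using Amazon.S3.Model;

        public static class CertificateArchiver
        {
            public static async Task StoreEncryptedAsync(byte[] certificateBytes)
            {
                using (var s3 = new AmazonS3Client())
                {
                    var request = new PutObjectRequest
                    {
                        BucketName = "example-csos-certificates",   // placeholder bucket
                        Key = "certificates/ValidOrderThree.pfx",   // placeholder key
                        InputStream = new MemoryStream(certificateBytes),
                        // Ask S3 to encrypt the stored object with AES-256.
                        ServerSideEncryptionMethod = ServerSideEncryptionMethod.AES256
                    };
                    await s3.PutObjectAsync(request);
                }
            }
        }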

    Audit Case 3 & 4

    These audit cases involve signing an order with a valid certificate, passing all validation.
    Validation No specific validation needs to be developed, but all other validations must be performed and pass.
    Audit Process An order is signed with a valid certificate, and validation succeeds.

    Audit Case 5

    These audit cases contain many of the certificate-specific validations. They relate to the validations which can be applied to the certificate and information unrelated to the order being signed. In advance of discussing the audit cases, it is important to discuss certificate hierarchy. Many of the test cases involve validating data against the “certificate chain”.
    Certificate Structure There is plenty of information on digital certificates concerning their creation, structure, and use. However, simple summaries are difficult to find. With that in mind, the following attempts to offer a summary and correlate it specifically to the CSOS requirements.

    1. A “Root Certificate” is created. This is used to create other certificates, and each of these contains the unique “thumbprint” of the root within its metadata. The DEA has created root certificates specifically for use with CSOS certificates. (Certificates have “private” and “public” portions. The “private” portion can be used to create other certificates and is never provided, to avoid fraud. The “public” portion can be used by anyone to validate that a certificate was issued from the root.)
    2. A “Sub CA” (Certificate Authority) certificate is created from the root certificate, as noted above. The same structure applies: these certificates can be used to create yet other certificates, and the Sub CA “thumbprint” is contained within the metadata. The DEA has created Sub CA certificates which are used to create the personal certificates.
    3. Personal certificates are created for use in signing.

    Therefore, for all CSOS certificates, the “chain” of certificates is: Root Certificate -> Sub CA Certificate -> Personal Certificate.
    Server Setup In order to successfully validate certificates, the system performing validation must have the Root and Sub CA certificates installed. Having access to the certificate files is not enough; these must actually be installed. The official CSOS Root and Sub CA certificates, for use with production, can be found here:
    CSOS Certificate Management (deaecom.gov). During development and auditing, a test suite of certificates is used (see: diversiontest.usdoj.gov). The “CA CERT” folder within contains separate Root and Sub CA certificates for use with the test certificates.
    Developer Warning: Make sure the test suite root and sub certificates are installed only on local and dev servers, and not on production machines. To install certificates on a local (Windows) machine, use the Certificate Manager. This can be found by pressing the Windows key and searching for “Manage computer certificates”, or by using the MMC snap-in (see: How to: View Certificates with the MMC Snap-in).

    Root and Sub CA certificates correctly installed on a Windows machine

    In order to verify these certificates are installed correctly, you can use any browser’s certificate management and import a certificate. The following examples use the ValidOrderThree certificate from the test suite of certificates:

    The left screenshot shows the Root and Sub CA certificates installed correctly; the “certificate chain” is complete and valid, with the root certificate at the top. On the right-hand side, the Root and Sub CA certificates are not installed correctly, and the Sub CA certificate cannot be found.

    Audit Case 5.1 The CSOS requirement for case 5.1 states “The system must determine that an order has not been altered during transmission.” This relates to the original client-server architecture expected when the requirements were established: the order would be created and signed on a client machine, transmitted to a server, and then final processing occurred. This case is designed to verify the data sent from the client machine to the server was not altered. In a web application, however, this may all occur on the same server. The order is submitted and signed through the web application, and all validation can occur prior to final success, all in memory. This is the case for Atlantic BT’s application. In order to provide an auditable result, the code was structured as follows:

    1. The order file is digitally signed, which results in a signature file (in memory). This is a small file containing encrypted (non-readable) data.
    2. All other validations are performed on the certificate.
    3. Lastly, a verification of the signature file and order file is performed.
    Code Reference (.Net) For signing, the RSACryptoServiceProvider.SignHash Method was used. For verification, the RSACryptoServiceProvider.VerifyHash Method was used.
    Audit Process In order to show success of this requirement, two signing attempts were shown. This was performed on a local machine using Visual Studio, which allows debugging and breakpoints to modify in-memory data.

    1. Successful: break after the SignHash method and show the in-memory order file; break before the VerifyHash method and show the in-memory order file hasn’t changed; show VerifyHash is successful, thus the sign process succeeds.
    2. Unsuccessful: break after the SignHash method and show the in-memory order file; modify the order file data; break before the VerifyHash method and show the in-memory order file has changed; show VerifyHash is unsuccessful, thus the sign process fails.

    In a client-server environment, this could be accomplished by simply using a different order file with the generated signature file for verification. In-memory data modification would not be required.
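
    A minimal sketch of this sign-then-verify round trip, using the methods referenced above, might look like the following; the certificate path and password are placeholders, error handling is omitted, and SHA-1 is used simply because it is the algorithm most CSOS references assume:

        // Sign the order file hash, then verify it before completing the order (sketch).
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;

        public static class OrderSigner
        {
            public static bool SignAndVerify(byte[] orderFileBytes)
            {
                var certificate = new X509Certificate2("signing-cert.pfx", "certificate-password");

                // Hash the order file (the EDI 850 "order file" described earlier).
                byte[] orderHash;
                using (var sha = SHA1.Create())
                {
                    orderHash = sha.ComputeHash(orderFileBytes);
                }

                // SignHash with the certificate's private key -> this produces the "signature file".
                var privateKey = (RSACryptoServiceProvider)certificate.PrivateKey;
                byte[] signatureFile = privateKey.SignHash(orderHash, CryptoConfig.MapNameToOID("SHA1"));

                // ...all other certificate validations happen here...

                // Re-hash the (possibly transmitted) order file and verify against the signature.
                byte[] verifyHash;
                using (var sha = SHA1.Create())
                {
                    verifyHash = sha.ComputeHash(orderFileBytes);
                }
                var publicKey = (RSACryptoServiceProvider)certificate.PublicKey.Key;
                return publicKey.VerifyHash(verifyHash, CryptoConfig.MapNameToOID("SHA1"), signatureFile);
            }
        }
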
    Audit Case 5.2 This audit case is to validate that the certificate being used to sign the order originated from the DEA for use with signing digital orders.
    Chain Building (.Net) First, a valid certificate chain must be determined. Using .Net, a valid X509Certificate2 object is created from the certificate data; throughout this guide, the certificate being validated will be an X509Certificate2 object. Next, an X509Chain is created, and the Build method is used to verify the chain and build it in memory. If the Root and Sub CA certificates have not been correctly installed, this method will fail.
    Validation To pass validation successfully, it should be verified that the thumbprint of the found root certificate matches one of the official DEA CSOS Root Certificate thumbprints. At the time of documentation, the two DEA root thumbprints are “9037640ee5c71e4ced76ed88fefa4e051907f7e7” and “f23190647132a900e634badf2a8f35a95bd383d7”. The test suite root certificate thumbprint is “fb1eb3439c28e14014f2ef942f0bdd636bfef467”.
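
    A minimal sketch of the chain build and root-thumbprint check described above (revocation is deliberately left to the separate CRL checks covered in Audit Cases 5.4-5.7):

        // Build the certificate chain and confirm it terminates in a DEA CSOS root (sketch).
        using System;
        using System.Linq;
        using System.Security.Cryptography.X509Certificates;

        public static class ChainValidator
        {
            private static readonly string[] DeaRootThumbprints =
            {
                "9037640EE5C71E4CED76ED88FEFA4E051907F7E7",
                "F23190647132A900E634BADF2A8F35A95BD383D7"
            };

            public static bool ValidateChain(byte[] certificateBytes)
            {
                var certificate = new X509Certificate2(certificateBytes);

                var chain = new X509Chain();
                // Revocation is checked separately against the DEA CRLs.
                chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;

                // Build fails if the Root and Sub CA certificates are not installed on the machine.
                if (!chain.Build(certificate))
                {
                    return false;
                }

                // The last element of the built chain is the root; its thumbprint must match a DEA root.
                var root = chain.ChainElements[chain.ChainElements.Count - 1].Certificate;
                return DeaRootThumbprints.Contains(root.Thumbprint, StringComparer.OrdinalIgnoreCase);
            }
        }
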
    Audit Process In order to show failure for this audit case, a certificate that was not created from a chain originating in a CSOS root certificate must be tested. There are a few methods for creating such a certificate.

    When signing an order using any non-CSOS certificate, the signing should fail, passing the audit case.
    Audit Case 5.3 This audit case validates the generated signature file from signing. Similar to 5.1, this means a signing process should fail if the signature file is modified in transit, whereas case 5.1 related to the order file.
    Audit Process In order to show success of this requirement, two signing attempts were shown. This was performed on a local machine using Visual Studio, which allows debugging and breakpoints to modify in-memory data.

    1. Successful: break after the SignHash method and show the in-memory signature file; break before the VerifyHash method and show the in-memory signature file hasn’t changed; show VerifyHash is successful, thus the sign process succeeds.
    2. Unsuccessful: break after the SignHash method and show the in-memory signature file; modify the signature file data; break before the VerifyHash method and show the in-memory signature file has changed; show VerifyHash is unsuccessful, thus the sign process fails.

    In a client-server environment, this could be accomplished by simply using a different signature file with the order file for verification. In-memory data modification would not be required.
    Audit Cases 5.4, 5.5, 5.6, 5.7 These audit cases validate that the certificate being used to sign the order hasn’t been revoked. The DEA permanently maintains a publicly accessible Certificate Revocation List (CRL) to check for certificate validity. There are four audit cases pertaining to certificates revoked for a variety of reasons; however, the validation is performed the same way for all cases.
    Determining the CRL for the Certificate The first step in validation is finding the appropriate CRL. The CRL for any CSOS certificate is contained within the metadata of the certificate itself. Unfortunately, this needs to be parsed from within a large string. The screenshot provided shows the URL value of the CRL, but this is only part of a string value which contains other certificate information.
    Using newlines, “URL=”, “cn=”, and “(” as breakpoints, the domain and other CRL information can be parsed. The domain on its own will be used to establish a connection.
    Connection An LDAP connection is required to connect to DEA-provided CRLs. For .Net, an “LdapConnection” object is utilized. The “Distinguished Name” is required as well, which in the screenshot is all the parameters from “cn=CRL3” through “c=US”; these are the identifiers for the specific CRL.
    Validation In order to be validated, a connection must be made, the correct CRL entry must be located, and the certificate being validated must not be found in the CRL (if it is found, the certificate is revoked). In other words, the LDAP connection must succeed, the CRL attribute must be found and parsed, and the certificate’s serial number must not appear in the revocation list.
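
    A minimal sketch of the LDAP lookup, using the System.DirectoryServices.Protocols library, is shown below; the host and distinguished name are placeholders parsed from the certificate metadata as described above, and decoding the returned CRL to search for the certificate’s serial number is only hinted at:

        // Fetch the CRL attribute for a certificate's CRL entry over LDAP (sketch).
        using System;
        using System.DirectoryServices.Protocols;

        public static class CrlChecker
        {
            public static byte[] FetchCrl(string ldapHost, string distinguishedName)
            {
                using (var connection = new LdapConnection(new LdapDirectoryIdentifier(ldapHost)))
                {
                    connection.AuthType = AuthType.Anonymous;
                    connection.SessionOptions.ProtocolVersion = 3;
                    connection.Bind();

                    // Request only the CRL attribute for the entry identified by the distinguished name.
                    var request = new SearchRequest(
                        distinguishedName,
                        "(objectClass=*)",
                        SearchScope.Base,
                        "certificaterevocationlist;binary");

                    var response = (SearchResponse)connection.SendRequest(request);
                    if (response.Entries.Count == 0 ||
                        !response.Entries[0].Attributes.Contains("certificaterevocationlist;binary"))
                    {
                        // Unable to check the CRL: validation fails.
                        throw new InvalidOperationException("CRL attribute not found.");
                    }

                    // Raw DER-encoded CRL; the certificate's serial number must not appear in it.
                    return (byte[])response.Entries[0].Attributes["certificaterevocationlist;binary"][0];
                }
            }
        }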

    Audit Process The audit process for these cases is straightforward. Revoked certificates from the test suite of certificates are used in signing an order. All attempts should fail.
    CRL and ARL Please see Audit Case 5.9 for additional information which builds upon this CRL validation.
    Audit Case 5.8 This audit case validates the certificate has not expired.
    Validation All certificates have a “Valid To” (expiration) date within their metadata. If this date has passed, then the certificate has expired. In .Net, there is a “GetExpirationDateString” method; the result can be used to create a date object for comparison.
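
    A minimal sketch of this check (using the equivalent NotAfter property rather than parsing the string) might look like:

        // Expiration check for Audit Case 5.8 (sketch).
        using System;
        using System.Security.Cryptography.X509Certificates;

        public static class ExpirationChecker
        {
            public static bool IsExpired(X509Certificate2 certificate)
            {
                // NotAfter is the "Valid To" date; GetExpirationDateString returns the same value as text.
                return DateTime.Now > certificate.NotAfter;
            }
        }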

    Audit Process Again, this is straightforward. An expired certificate from the test suite should fail validation.
    Audit Case 5.9 This audit case validates that neither a Sub CA nor a root certificate in the certificate chain has been revoked.
    Validation Once a valid certificate chain has been established (see Audit Case 5.2), and the certificate itself has been determined as not revoked, every other certificate in the chain must be found as not revoked. The process developed for Audit Cases 5.4-5.7 can almost be used for these certificates.
    However, a disparity exists between the test suite and real-world certificates at time of this documentation. For test suite certificates, this CRL check passed validation. When performing tests against a production certificate, this validation failed. A different attribute was found when using the LDAP connection for these certificates: “authorityrevocationlist;binary”. These connections contained attributes for both the CRL and the ARL. Therefore, the resulting validation occurs for all certificates:

    • Connect to the LDAP.
    • Check for the CRL attribute.
      * If this was not found, validation fails for being unable to check the CRL.
      * If the CRL check was successful and the certificate was found (thus revoked), validation fails.
      * If the CRL check was successful and the certificate was not found, processing continues.
    • Check for the ARL attribute.
      * If this was not found, validation succeeds (checking the ARL is not required, and the attribute will not exist for personal certificates).
      * If the ARL check was successful and the certificate was found (thus revoked), validation fails.
      * If the ARL check was successful and the certificate was not found, validation succeeds.
    Audit Process A valid certificate linked to a revoked Sub CA certificate from the test suite should fail validation.
    Audit Case 5.10 This audit case validates that neither a Sub CA nor a root certificate in the certificate chain has expired.
    Validation Once a valid certificate chain has been established (see Audit Case 5.2), and the certificate itself has been determined as not expired, every other certificate in the chain must be found as not expired. The process developed for Audit Case 5.8 can be used for these certificates.
    Audit Process A valid certificate linked to an expired Sub CA certificate from the test suite should fail validation.

    Audit Case 6

    This audit case pertains to the drugs specified in an order, and verifies that the signing certificate is authorized to purchase those drugs. The DEA has seven “schedules” of access for controlled substances (see: Controlled Substance Schedules). These are 1, 2, 2n, 3, 3n, 4, and 5. Each CSOS certificate contains in its metadata the list of authorized schedules that can be purchased.
    Order Schedules The order file which is being signed must contain the list of drugs in the order, and the schedule access for each drug. The list of these schedules must be gathered for comparison.

    Certificate Schedules Each certificate contains many fields of data. The metadata field for OID “2.16.840.1.101.3.5.4” contains the 8-bit field related to available schedules. (See CSOS Certificate and CRL Profile (PDF) 6.2)
    Please note it is only the last 8-bit value that specifies the schedule access. The number of octets in the field may vary; it is always the last octet which should be used.
    This 8-bit field correlates to the 7 schedules (see CSOS Certificate and CRL Profile (PDF) 3.3.4):

    • Bit 0: Schedule 1
    • Bit 1: Schedule 2
    • Bit 2: Schedule 2n
    • Bit 3: Schedule 3
    • Bit 4: Schedule 3n
    • Bit 5: Schedule 4
    • Bit 6: Schedule 5
    • Bit 7: Not Used

    In the provided screenshot, the “03 02 01” does not indicate access to schedules 3, 2, and 1. Instead, the “5e” maps to schedule access. Converting hex to binary, “5” maps to “0101” and “e” maps to “1110”, so the 8-bit field for this certificate is “01011110”. This certificate would have access to all schedules except 1 and 2n.
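
    A minimal sketch of decoding that last octet, using the “5e” example above (the code that extracts the raw octets from the certificate extension is assumed), might look like:

        // Map the last octet of OID 2.16.840.1.101.3.5.4 to authorized schedules (sketch).
        using System.Collections.Generic;

        public static class ScheduleDecoder
        {
            private static readonly string[] Schedules = { "1", "2", "2n", "3", "3n", "4", "5" };

            public static IList<string> AuthorizedSchedules(byte lastOctet)
            {
                var authorized = new List<string>();
                for (int bit = 0; bit < Schedules.Length; bit++)
                {
                    // Bit 0 is the most significant bit of the octet; bit 7 is unused.
                    if (((lastOctet >> (7 - bit)) & 1) == 1)
                    {
                        authorized.Add(Schedules[bit]);
                    }
                }
                return authorized;
            }
        }

        // Example: 0x5E = 01011110 -> schedules 2, 3, 3n, 4, 5 (everything except 1 and 2n).
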
    Validation The order schedules must be compared to the certificate schedules. If the order contains any schedules not approved within the certificate schedules, validation fails.
    Audit Process A certificate not authorized to purchase drugs within the test order should fail validation.

    Audit Case 7

    These audit cases pertain to authorization related to the CSOS application and the signing certificate. Access to the application must require a login, and in-memory retention of the certificate and certificate password must be tightly controlled.
    Audit Case 7.1 A user must log in with a username and password to the CSOS application used to create orders containing controlled substances. Creation of these orders is not allowed in applications open to public access. Also, users in the system must be able to be invalidated.
    Validation Standard username/password (or biometric) authentication must be implemented on the application.
    Audit Process

    • A new user to the system is created
    • Login is successful for the new user
    • An order is created and signed by the new user
    • The user is logged out
    • Access to the application is revoked (via admin or through database changes)
    • Login is unsuccessful

    Audit Case 7.2 Separate from application login, a CSOS application is allowed to re-use a user’s certificate for 10 minutes after the user provides certificate credentials. This could be handled in a variety of system architectures. The following lists some; others may also be valid.

    • An individual has a unique login to the system. A single certificate is associated with the account. Access to the entire application is timed out after 10 minutes of inactivity.
    • After a user logs into the application (with any timeout), the user must provide the password for their certificate (possibly a username as well). After successfully signing an order with these credentials, the user could sign another order without re-authentication for 10 minutes, otherwise they have to re-authenticate the certificate.
    • After a user logs into the application (with any timeout), the user must provide the password for the certificate each time they sign an order. This is the approach in the AtlanticBT solution.

    Validation As noted above, AtlanticBT’s solution required the user to supply the certificate password on every signing attempt. Other approaches may pass this validation as well.
    Audit Process Auditing the solution is handled on a case-by-case basis. For AtlanticBT, showing that the certificate password was required for a second order signed directly after a first order was sufficient.
    Audit Case 7.3 If the password for the user’s certificate (private key) is retained in memory, this must be cleared from memory when a timeout occurs (as in Audit Case 7.2).
    Validation If the password is retained in memory, it must be cleared on timeout.
    Audit Process Not required if password is not retained in memory. Otherwise, code debugging (or some other display mechanism) would be required to verify memory is cleared.

    Audit Cases 8 & 9

    Computers used to sign and validate orders must be synced with the NIST time servers. There is a 5-minute variance allowed. See NIST Internet Time Servers for a list of NIST time servers. Note that “time.nist.gov” is the global address which should be used.
    Validation Any server or computer used in the signing process must be synced with the NIST time servers. For Windows machines, it may be enough to set the time server through configuration (see: How to use Alarms & Clock app “synchronizing with an Internet time server”). Atlantic BT’s solution deployed the web application to AWS EC2 instances, which stay in UTC time automatically. Other web servers can be configured to sync explicitly with the time.nist.gov servers. Alternatively, a connection with the NIST time servers can be established at the time of signing. Atlantic BT added this as well, accomplishing it by sending a UDP packet, expecting a response, and parsing the response for the current date and time; many methods may achieve the same result. If the difference between the NIST time and the system time is greater than 5 minutes (either ahead or behind), then validation fails.
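
    As one possible illustration of the at-signing check, the sketch below uses the RFC 868 Time Protocol over UDP port 37, which NIST servers have historically answered; this protocol choice is an assumption for the example, not necessarily the exact packet format Atlantic BT used:

        // Compare local UTC time against an NIST time server at signing time (sketch).
        using System;
        using System.Net;
        using System.Net.Sockets;

        public static class NistTimeCheck
        {
            public static bool IsWithinFiveMinutes()
            {
                using (var udp = new UdpClient("time.nist.gov", 37))
                {
                    udp.Client.ReceiveTimeout = 5000;

                    // RFC 868 over UDP: the request payload is ignored; the reply is a
                    // 4-byte big-endian count of seconds since 1900-01-01 UTC.
                    udp.Send(new byte[1], 1);
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] response = udp.Receive(ref remote);

                    uint secondsSince1900 = ((uint)response[0] << 24) | ((uint)response[1] << 16) |
                                            ((uint)response[2] << 8) | response[3];
                    DateTime nistUtc = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc)
                        .AddSeconds(secondsSince1900);

                    // Validation fails if local time differs from NIST time by more than 5 minutes.
                    return Math.Abs((DateTime.UtcNow - nistUtc).TotalMinutes) <= 5;
                }
            }
        }
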
    Audit Process If time server synchronization alone is used, auditing involves occasionally changing the system time and returning later to find the time has synced back to NIST time. If at-signing validation is used, the server time can be changed more than five minutes ahead of the NIST servers and an order signing attempted; validation should fail.

    Audit Case 10

    The order file which is being signed must contain a minimum amount of data. This does not have to be verified at time of signing through code, but the audit process involves inspecting the file used in validation.
    Validation The order file must contain: a unique tracking number, made up of a 2-digit year, ‘X’, and a 6-digit number; the supplier’s name, address, and DEA number; the purchaser’s DEA number; the order date; and, for each ordered substance, the name and strength (or NDC), quantity, and number of packages.
    Audit Process Auditors will inspect the order file of a newly created order.

    Audit Cases 11 & 12

    The DEA requires any information related to orders be archived together in a location identifiable to that specific order.
    Validation Any solution where this information is stored (for a minimum of 3 years) is valid. The order file and signature file should be included at a minimum. Any other data generated specific to a signed order should also be archived; for example, if digital files are generated when an order is acknowledged, or when the purchaser “checks in” the contents upon physical receipt, these should be archived as well.
    Audit Process Auditing the solution is handled on a case-by-case basis. Likely this will involve browsing to the archive location for a newly created order.

    Audit Case 13

    This audit case verifies the certificate used in signing was issued to an individual associated with the purchaser. A pharmacy (as one example) is provided a DEA number for use with purchasing controlled substances. This number is required to be associated with every order the pharmacy makes. When a certificate is issued to an individual, they are signing orders on behalf of the purchaser. Therefore, the DEA number for a pharmacy is part of the metadata for any individuals signing orders for that pharmacy.
    The DEA number is not explicit in the metadata for the certificate. Instead, a SHA-1 (or SHA-256, see below) hash of the purchaser’s DEA number joined with the certificate serial number is. (See CSOS Certificate and CRL Profile (PDF) 3.3.7)
    Validation In order to validate a certificate, the DEA number of the purchaser must be known. To validate:

    1. Parse the serial number from the certificate. This can be done using the certificate’s “subject”. (See screenshot)
    2. Concatenate the DEA number with the serial number.
    3. Create a hash of this string. Note: The Public Key Infrastructure Analysis document indicates the hashing algorithm is SHA-1; undocumented, however, is that the “Signature algorithm” field of the certificate indicates whether a SHA-1 or SHA-256 algorithm should be used. The screenshot shows a certificate using SHA-256.
    4. The metadata field for OID “2.16.840.1.101.3.5.7” contains the hashed value for comparison. (See CSOS Certificate and CRL Profile (PDF) 6.2)
    5. Compare the hashes. If they match, validation succeeds.
    Please note that in development, it was determined there was a 2-byte offset at the start of the field in the certificate. Documentation of this offset could not be found. For SHA-256 it was an offset of hex values “0x04” and “0x20”; for SHA-1 it was an offset of hex values “0x04” and “0x14”. (These appear to be a DER octet-string header: tag 0x04 followed by the length, 32 bytes for SHA-256 and 20 bytes for SHA-1.)
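
    A minimal sketch of the hash comparison, assuming the purchaser’s DEA number, the serial number parsed from the subject (step 1), and the raw OID 2.16.840.1.101.3.5.7 extension value are already in hand, might look like:

        // Compare the computed DEA-number hash against the value stored in the certificate (sketch).
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        public static class DeaNumberValidator
        {
            public static bool Matches(string deaNumber, string serialNumber, bool useSha256, byte[] extensionValue)
            {
                // Steps 2-3: concatenate the DEA number with the serial number and hash it.
                // The exact string formatting must match the CSOS Certificate and CRL Profile.
                byte[] input = Encoding.ASCII.GetBytes(deaNumber + serialNumber);
                byte[] computed;
                using (HashAlgorithm algorithm = useSha256 ? (HashAlgorithm)SHA256.Create() : SHA1.Create())
                {
                    computed = algorithm.ComputeHash(input);
                }

                // Steps 4-5: skip the 2-byte offset noted above, then compare the hashes.
                byte[] stored = extensionValue.Skip(2).ToArray();
                return computed.SequenceEqual(stored);
            }
        }
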
    Audit Process Signing with a certificate associated with a DEA number different than that of the one used to create the test order should fail validation.

    If we can be of any assistance regarding CSOS, please drop us a line!

  • Website Discovery Strategy That Will Lead You to Success

     
    You’re working with a web design partner, developing a corporate website or application. The research phase has been completed. Cheers! Both teams understand:
    • Your company goals.
    • The features you’re interested in.
    • Your application’s users.

    What’s next?

    It’s tempting to want to jump in and start working on homepage designs. Web design is exciting and sexy—I’m always blown away by the creativity of our designers. But both visual and technical design need a foundation. The strategy is key; it ensures that your design is functioning at its best. Would you ask an interior designer to plan their work without seeing concept drawings of a house? That should only be the basis of a terrible reality show. These specific design ideas and plans are what we mean when we talk about strategy.

    This website design didn’t come from a vacuum. The designers relied on concepts developed during the site strategy phase.

    Let’s keep going with the corporate website example mentioned above. It will likely need a Content Management System (CMS) to edit or add new web pages. WordPress is a popular and powerful CMS. Yet, if you have complicated and unique needs, WordPress may not be the right platform.

    Writing the content is another concern. Have you only written content for print marketing or articles? If you haven’t written content for the web before, get ready. Web content writing may be different than what you’re used to. A content strategist can help provide guidelines (or write it for you!).

    Are you planning to have a single page to list and describe all your services? Your users (and your SEO) may be better served if each service has its own page. All these concerns show why this strategy phase is so important to the success of your project.

    Rounding Out Website Discovery

    The strategy phase is part of Discovery, the first “D” of our 5D process for projects. The discovery team has worked with you to verify what, why and for whom. With strategy, we focus on the how. There are three notable aspects of strategy in this last half of the phase:

    • Technical (the development process)
    • Content (how to word your message)
    • Information architecture (how is the content organized)

    Visual designers will be a part of this process. But you won’t see the majority of their eye-catching work until these strategies are in place.

    What should you do now?  A well-structured discovery process is flexible. Here at Atlantic BT, we will only use methods that will add value and understanding to your project. For example, a content strategy may be unnecessary in a data-heavy custom application. Remember, new information could always change that direction.

    Data Architecture

    What is Data Architecture? It’s the development of the basic information structures needed to build a website or app. This usually includes identifying three things:
    • Content types for CMS
    • Data Schema for an application
    • Taxonomies
    This architecture is the starting point for technical specifications. Now you know what type of content is available to use. This knowledge is exactly what Information Architecture and Visual Design need.

    Information Architecture

    Information Architecture (IA) is all about organization. It brings clarity and cohesion to the content within the site or system. IA is often used to define site structure. It’s the technical organization of information. This helps inform the navigational structure. It also directly impacts total site usability and makes finding information easier.

    Content Strategy

    Content Strategy helps our clients decide how they will present their brand to the world. It creates a data-driven and creative outline for communicating services and products. The data-driven side relies on analytics and other findings from our Discovery. The creative side maps out possibilities for our clients’ web presence. Strategists and designers collaborate on messaging. As a result, our clients receive a customized and reliable plan for their content. In conclusion, the organization, publication, and promotion of their content will strengthen their brand’s reputation.

    User Workflow Research

    The User Workflow Research phase of web design is exactly what it sounds like. It consists of mapping a user’s process flow for using a site or system. Then it compares this map against the ideal business process flow. A gap analysis provides areas of focus by revealing missed opportunities for engagement. The research can also provide more insight into ‘how’ users interact with the site or system. Yet, its highest value comes from understanding what the majority of users ‘want.’ As a result, a site’s design can give users the UX they desire.

    Requirements Matrix

    All stakeholders and team members must be on the same page for the project to be successful. They must have a shared and clear understanding of the project’s goals. For a technology project, the process will include defining key features the site will provide. Most of all, it’s essential to understand how those features will be developed, impacting the technical specifications. A requirements matrix will also inform visual web designers on features to include in mockups.

    Concept Board

    A concept board is a type of collage consisting of images, text, and samples of objects in a composition. It’s inspired by a set topic or can be any material chosen from across the project. We use them to share visual and thematic ideas based on a project topic. Concept boards can also show how a legacy site or application differs from modern examples.

    The concept board allows designers to share visual notes on the look and feel of your website or application.

    Discovered and Ready For Design

    The next step in the web development process is Design. This is where all the details get ironed out. We put in place:

    • What information is on each page
    • How pages and page elements look
    • Where data goes when someone clicks a button
    Now that you’ve completed the Discovery Research and Strategy, your web project is set up for success as it moves forward. You are in a great place for the Design, Development, and Deployment processes.
     
    Ready to take the leap and get started with your next web design project? Contact Atlantic BT today to schedule a free consultation!
     
  • Discovery Research: It’s Not Just What You’re Building

    Let’s imagine you have a business website or application you need built. In this case, we’ll say it’s a corporate marketing site to inform services you provide, generate customer leads, and promote job openings for prospective employees. Your staff isn’t technical, so you go to a vendor to have this site built. Some agencies may dive right in, saying, “Sure, got it. We’ll go build it now.” No questions, no research, just build the site and deliver it as soon as possible. What could go wrong?

    A good digital partner understands a website is more than just features. Features may be what is being built, but they don’t answer why. (“Why” may sound like a superfluous question, but see Randy Earl’s article Are You Looking at the Trees or the Landscape? on the power of this question.) Take promoting job openings from our example of the corporate marketing site. Is your company trying to attract the top talent in the area? In the country? Or have you had an influx of funding in order to grow extremely quickly? Knowing your company’s goals can heavily influence both the features and the visual design. Maybe you need a classy screening tool to get only top talent, or maybe you need a pay-per-click campaign to heavily promote a warm and welcoming job listing page.

    Who is browsing the site also has a large impact. When trying to portray your brand and acquire leads, are you targeting millennials or CEOs? Maybe they’re millennial CEOs? In our example, we’ve already identified new customers and employee prospects as two different user groups. Shouldn’t the vendor building your site understand these groups in order to try and appeal to both?

    Great Projects Begin with Research

    At Atlantic BT, the first “D” in our 5D Process for projects is Discovery.  (The other four phases are Design, Development, Deployment, and Delight; stay tuned for more posts on each.) This phase starts with research, and a primary goal is to answer these questions of what, why and for whom. Armed with an understanding of company goals, requested features, and the target audience(s), we can begin creating a comprehensive strategy for a successful website.

    The research half of our Discovery Workflow

    So you’ve chosen your digital partner, and you’re assigned a discovery team. What should you expect from their research (a broad term that could mean anything)? Here are some methods Atlantic BT uses in discovery.

    This list isn’t comprehensive, as a well structured discovery process is flexible. We only use methods that will add value and understanding to your project, and new information could always change that direction. Some methods, such as the content inventory, are only applicable if we are migrating or redesigning an existing website.

    Stakeholder Interviews

    Stakeholder Interviews are simple, semi-structured interviews with key members in the client organization; typically members of the executive team and any closely related functional management. These interviews focus on high-level requirements and business goals, and generally avoid technical specifics of features, functionality, content, or any other detailed aspects of a solution. The objective at this level is to focus on defining the need, not the solution. This information helps determine further steps in the discovery process, and is useful material for the entire project.

    Lean Canvas

    The Lean Canvas is a one-page business model diagram which summarizes the key elements of a comprehensive business plan. It focuses on problems, solutions, key metrics, and competitive advantages. This workshop is instrumental for new product development, especially entrepreneurial ventures. This method enables us to identify opportunities in multiple aspects of the business process where we can help the client to leverage technology to achieve their vision for success.

    Content Inventory

    A content inventory is the process and result of cataloging and organizing the entire contents of a website. It functions as a quantitative analysis of the site, answering the question of “What is there?” With a full inventory, we can better understand the breadth, depth, and general volume of information on a site, as well as existing technical issues, configurations, and functionality. This inventory can be the basis of a follow-on content audit, utilized for information architecture, content migration and general digital strategy of a new site.

    Analytics Review

    An analytics review is an opportunity to explore the existing web analytics data of a client’s current site or application. The objective of this review is to aid other departments with critical data points to better understand the existing architecture, traffic patterns, trends, outliers, and usage points of the site. Results can help reveal hosting and architecture needs, identify user groups, or provide guidance in the content strategy.

    Technical Audit

    We designed our technical audit to identify existing programmatic functionality and configuration. It focuses on features which include database access, internal or 3rd party API communications, advanced forms, or other custom programming. The audit is not designed to specify which features will or will not be implemented in a new site. Instead, it works as a reference when discussing features to implement and when planning data migration.

    User Understanding

    User understanding research encompasses all activities and exercises that aim to improve insight into who the users of a given site/system are, and how they use or interact with the site/system. This research also helps validate (or refute) any preconceived notions or opinions about the user base. User research can take any number of forms, including persona workshops, contextual interviews, focus groups, or surveys. We choose the research techniques based on client need. The output of this research is vital to the rest of the project, and especially critical to methods such as information architecture, content strategy, and visual design.

    Wrapping Up Research

    Research is just the first half of Discovery. Now that you and your digital partner understand why you’re building a website, who it’s for, and have some requirements for what’s to be built, it’s time for strategy. Coming Soon: Continue your project journey with details about the second half of Discovery, Strategy.