Atlantic Business Technologies, Inc.

Category: Managed Services

  • Google Local Business Listings Now Have Analytics


    Google now incorporates local business listings on two levels: locally targeted searches and broad searches. Because of these new methods, it is extremely important to have a complete and detailed local business profile. Similar to Google’s algorithm for ranking websites in the organic SERPs, there are a number of local search ranking factors that contribute to ranking well in the local business results.

    There used to be no easy way to know how much traffic you gain from your local business listing, until now! Google has just integrated basic analytics reporting into the Local Business Center. Not only can you see basic data like impressions, you can also see the key actions people take on your local listing.

    Your local business profile has become more relevant than ever before. Here are three key benefits that make a world of difference.

    1. Google defines a local business listing “action” as any of the following: clicking for more info on Maps, requesting driving directions, or clicking through to your website.

    [Screenshot: Google local business analytics report]

    2. You can easily see what keywords and phrases people are searching to find your local listing.

    [Screenshot: top search queries report]

    3. Google has added the ability to track people who have requested driving directions from your local business listing.  For brick-and-mortar stores, this could help you identify how far people are willing to drive to get to your location.  In turn, if you are doing PPC advertising you can try to geo-target those locations.

    [Screenshot: geo-location of driving direction requests]

    I’m sure this is only a sign of things to come for the tracking capabilities of local business listings. You may want to leverage this data by adding new content (videos, images, logos, a business description) to your listing to see if it helps increase impressions, clicks, driving direction requests, etc.

    Sign up and verify your local business listing in Google, Yahoo, and MSN.  You may also want to check out quick tips for local SEO, for some helpful guidelines on how to help your website rank higher for localized searches.

  • Getting Caught in the Pay-Per-Click Trap

    Recently Mark Thompson wrote a post on Search Engine Optimization vs. Pay Per Click Marketing and which marketing strategy is more effective. This post focuses on pay-per-click marketing and something we call the “PPC Trap.”

    Here’s how the PPC trap works:

    A business jumps into online advertising by bidding on a few keywords on a search engine. At first, competition is low and the bid price for the keywords is also low, so the business is able to grow and prosper online. As time goes on, more competition enters the market and bid prices steadily increase. By now the business is dependent on the flow of new business leads from the paid advertising, so they double-down and increase their spending to maintain top positions and traffic. As spending increases, conversions decrease because there are more competitors aggressively competing for each customer.
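    To put some purely hypothetical numbers on it: if a click costs $1 and 2% of visitors convert, each new customer costs $50 to acquire. If competition pushes the bid to $3 and the conversion rate slips to 1.5%, that same customer now costs $200, a fourfold increase with no change in what that customer is worth to you.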

    We call it a trap because companies we’ve worked with (and were brought in to help) were literally trapped by their paid search engine advertising. The operation they built around the business generated from PPC advertising creates fixed overhead and expenses, so they are forced to continue advertising just to sustain it. At the same time, they are breaking even (or worse) on their ongoing operations because of the high cost of advertising.

    Breaking the cycle can be difficult and requires some delicate work to bring the business back to profitability and eliminate the PPC dependency. In many cases it simply involves a focus on fundamentals: increasing conversions, increasing revenue per customer, reducing overhead, and focusing on natural search engine optimization. In some cases, however, the business fundamentals can be so upside down that the only option is a bankruptcy reorganization followed by a focus on the fundamentals described above.

    If you think you might be at risk of falling into the PPC trap, the sooner you identify the issue and begin working on it, the stronger your business will be in the long run. Even if things are going well and profits are strong, if your business depends in large part on PPC advertising, it would be smart to diversify your advertising and marketing and concentrate on building competitive advantages wherever possible.

  • Beyond HTML 5 and CSS 3: Sample of Suggestions

    Since HTML 5 and CSS 3 are still working drafts, I thought I would propose a few ideas I’ve been cooking up that would make my life easier and perhaps add an edge to the constantly evolving standards. That said, I hope these ideas never become proprietary junk that Internet Explorer or Mozilla latches onto on its own; I would hate to aid non-standards.

    I realize some of my ideas may go beyond the so-called “scope” of the original intentions of CSS and HTML/XHTML. But my hope, like all “

    I’ll begin with HTML, since semantic markup is the most important:

    1. Custom Elements

    I’ve read some of the working draft for HTML 5 and I must say that I’m impressed with some of the ideas that have been added. It’s going to give more meaning to HTML code, which is precisely what web developers need! However, I did cringe at the proposed “header” and “footer” elements. I understand their purpose, but aren’t the words header and footer presentational? For instance, what if I initially design the footer to sit at the bottom of the document, and in the markup perhaps it does, but later (for some odd reason) the client decides he/she wants that info moved to the top? Would it still be considered the “footer”? I’m sure there are several sides to the argument, but I’ve read accounts where people like Andy Clarke have proposed the same thing: footers should be “siteinfo” and headers should be “branding” or “masthead.” In contrast, Dan Cederholm and Andy Budd still use “header” and “footer” on their personal sites. But regardless of who uses what, I still feel these names are presentational, and HTML is for content, not for its appearance or layout.

    I really like most of the other elements, like “menu,” “section,” “dialog,” “aside,” “datagrid,” etc. They give more meaning to the markup in place of the generic div’s with id’s. This will definitely give more power to CSS and more meaning to the document at the same time. Specifically, it allows you to separate your styles with semantic element names and give them unique rules with really low specificity instead of relying on arbitrary div’s. Simply put, this will give you more control with id’s when they’re needed.

    But what I’m proposing is that instead of giving set element names that will always be debated on their semantic merit, why not give that power over to the developer? If a developer could declare his/her own elements in the head of the document, he/she could have more control. I envision it working something like this:

    
    		<head>
    			<elements type="elements/text">
    				section : div;
    				dialog : div;
    				menu : ul;
    				masthead : div;
    				newelementname : baseelementbehavior;
    			</elements>
    		</head>
    
    				

    In this element named “elements,” the code would be handled by the browser. It would read each line, separated by a semi-colon, take note of the custom elements being declared, and determine their base behavior from the base element listed after the colon. For instance, the first custom item, “section,” would behave like a “div.” This would give ultimate control to the developer to make his/her document follow its purpose semantically. It would also open up the possibilities for more microformats! A developer could use this to create <tel> elements, <adr> elements, <product> elements, and more, replacing the annoying <div class="product"> and more nested, loosely based “classitis.”
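    To illustrate, here’s a rough sketch of how a document might use the proposed “elements” declaration. Everything here is hypothetical syntax from my proposal above; no browser understands it, and the element and property names are made up for the example:

    		<head>
    			<elements type="elements/text">
    				masthead : div;
    				product : div;
    			</elements>
    			<style type="text/css">
    				masthead { background: #333; color: #fff; }
    				product { border: 1px solid #ccc; padding: 10px; }
    			</style>
    		</head>
    		<body>
    			<masthead>Branding and logo</masthead>
    			<product>
    				<h2>Widget 3000</h2>
    				<p>Marked up without a single class.</p>
    			</product>
    		</body>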

    2. Definition List Item

    I can’t recall where I first saw the <di> element, and I’ve been unable to relocate it through my web searches. XHTML 2’s working draft describes a definition list item, but I wonder why this element is still not supported well, if at all, and I didn’t see any proposed plans for it in the HTML 5 working drafts. I feel it would be a good addition because it provides one more “hook” for styling and more meaning as well. The definition item would separate definition terms and their descriptions from others. If I were creating a list of definitions, each term and its description or descriptions should be separated logically from its siblings. Thus the <di> element would encase and separate each item.
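    As a quick sketch, a definition list using the proposed <di> wrapper might look like this (hypothetical markup; no browser currently recognizes <di>):

    		<dl>
    			<di>
    				<dt>specificity</dt>
    				<dd>The weight a selector carries in the cascade.</dd>
    			</di>
    			<di>
    				<dt>cascade</dt>
    				<dd>The order in which competing style rules are resolved.</dd>
    			</di>
    		</dl>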

    3. CSS Snap Declaration

    My third proposal is for a CSS 3 style declaration called “snap.” This can replace the need to use javascript to snap elements to non-parental elements. This “snapping” has been done via javascript on many old versions of dropdown menus. Now with CSS and better standards, the snap is no longer needed for menus. But the need for snap in other presentational ways is still warranted. What if I wanted to “snap” one element in the lower portion of the document to another element that was completely unrelated? I could simply add it to the style rule:

    
    		#snapping-element {
    			display: block;
    			width: 300px;
    			snap: #host-element;
    			left: 0; top: 10px;
    			…
    		}
    				

    In this example, the element with the id “snapping-element” will be snapped to the element with the id “host-element.” The host element, however, should probably be given position: relative, and the snapping element’s position would then be based off the host element’s relative state, much like absolute positioning works, except that absolute positioning is limited to basing itself off ancestor elements.
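    To complete the picture (again, “snap” is purely a proposal of mine and no browser implements it), the host element’s rule might simply need:

    		#host-element {
    			position: relative;
    		}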

    4. Column Hovers

    Placing a hover on a table row is a cinch. But what if I wanted to create a triangulation effect for the rows and columns with a different hover state color? I can’t! I’ve seen others mention it in posts, but I haven’t seen any plans to integrate this into CSS. Why not? This would provide another layer of accessibility/legibility to users with a simple col:hover rule. Of course the colgroup elements must all be declared in order for this to work.
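    As a rough sketch of the idea (browsers don’t currently apply a hover highlight to columns this way, so the col:hover rule is hypothetical), it might look like this:

    		/* hypothetical: highlight whichever column the cursor is over */
    		col:hover {
    			background-color: #ffffcc;
    		}

    		/* combined with a row hover, the two highlights would triangulate on one cell */
    		tr:hover {
    			background-color: #e8f0fe;
    		}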

    5. Javascript-type Pseudo-classes

    My last proposal for CSS 3 is probably the most CSS-scope-defying of them all. What if there were more pseudo-classes available beyond just visited, hover, active, focus, etc.? Why not onclick? I realize focus is currently a bit of a Mozilla proprietary state, but it’s a really good one! If javascript already handles onmouseover (hover) and onblur and onfocus (focus), why not onclick?! Man, would this be powerful! All javascript toggling could be handled in CSS! Granted, this kind of event catching is a bit more daring for CSS’s scope of just styling. But why not, eh?
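    Just to sketch the idea, a javascript-free toggle might read something like the rule below. The :onclick pseudo-class does not exist in any CSS specification or browser, and the id’s are made up; this is purely wishful syntax built on the real adjacent-sibling combinator:

    		/* purely hypothetical: no spec or browser supports :onclick */
    		#menu-toggle:onclick + #dropdown {
    			display: block;
    		}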

    Conclusion

    Perhaps some of this has been mentioned before me; I’m not sure. If not, then I’m glad to be able to provide some recommendations. If so, I apologize for assuming credit; that’s not my intention. I do, however, feel that these would provide a lot more meaning to HTML 5 and give CSS 3 more power and control over the visual and presentational experience.

    Any comments, suggestions, critiques are happily welcomed. Just some food for thought.

  • CSS 2.1 Selector Specificity

    I’m probably the 100th person to write about CSS 2.1’s selector specificity, but I’m going to take a stab at it anyway since it seems to be such a stumbling block for so many people.

    I’m not writing this to take away from the brilliant explanations of Andy Clarke, Patrick Griffiths, Eric Meyer, or Molly E. Holzschlag, but to merely supplement their posts with, perhaps, another angle. Many people feel the need to re-explain this topic in layman’s terms rather than enduring W3C’s overly technical explanation about specificity.

    What is specificity?

    To be brief, it’s the applied calculation of the priorities of CSS selectors and how they cascade through a stylesheet. Simply put, selectors with a higher specificity will overrule other selectors in the cascading order.

    How it works

    There are two ways to determine specificity: the “right” way and the “quick and dirty” way. According to the W3C, specificity is officially calculated using four numbers concatenated, like a, b, c, d. The “quick and dirty” technique is to assign values to each type of selector and add the values up. For example, general elements have a value of 1, classes have a value of 10, ID’s have a value of 100, and inline styles have a value of 1000. However, this value-based “quick addition” technique is a bit misleading because it implies that ten of any selector will override the next highest type. Following that logic, ten ID’s in a selector would override an inline style, because ten ID’s are worth 10 x 100, which equals 1000, the same as an inline style. That is far from true. The technique can still be used as an easy way to get a rough idea of the specificity of a particular selector, but it should never be fully relied upon.

    The W3C states that “concatenating the four numbers a-b-c-d (in a number system with a large base) gives the specificity.” This is the correct method to rely on, because it separates the values into four categories: a, b, c, and d. The variable a is reserved for the number of inline styles and has the highest priority, b is the number of ID’s, c is the number of other attributes (including classes, but not ID’s) and pseudo-classes, and d is the number of elements. This is the correct order of specificity, and pseudo-elements are ignored.
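    To see why the concatenation rule matters, here’s a quick example of my own: a selector made of eleven classes scores 110 with the “quick and dirty” method, yet a single ID still beats it, because b is compared before c.

    		/* specificity 0,0,11,0 ("quick and dirty" score: 110) */
    		.a.b.c.d.e.f.g.h.i.j.k { color: red; }

    		/* specificity 0,1,0,0 ("quick and dirty" score: 100) wins anyway */
    		#content { color: green; }

    Applied to an element carrying the id “content” and all eleven classes, the text comes out green.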

    Let’s focus on b and c since these are the subjects of confusion. ID selectors (b) are the most valuable asset to CSS, so they are given the second highest priority, next to inline styles (a). ID selectors are written with a # in front of the name given to the ID. So, #content is referencing <div id="content"> which has a value of 100 using the “quick and dirty” method. These selectors, like all selectors, can be used in combination with any other selectors. For instance, #main-area #content would add up to 200.

    Class selectors and other attribute selectors are assigned the variable c. These are each given a value of 10 using the “quick and dirty” method. Class selectors are denoted with a dot (.) before the name of the class, like .box. Attribute selectors are declared with the attribute name inside a set of square brackets, like a[rel="friend"]. It is not necessary to include the value of the attribute. Additionally, the = can be replaced with ~=, *=, ^=, or $=, depending on how you’re matching the value. A better explanation of these operators can be found in 456 Berea Street’s “CSS 3 selectors explained” post.
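    For reference, here’s what those match operators look like in practice (the first two are CSS 2.1; ^=, $=, and *= come from the CSS 3 selectors module; the attribute values are made up):

    		a[rel="friend"] { }      /* exact match: rel is exactly "friend" */
    		a[rel~="friend"] { }     /* word match: "friend" appears in a space-separated list */
    		a[href^="http://"] { }   /* value starts with "http://" */
    		a[href$=".pdf"] { }      /* value ends with ".pdf" */
    		a[href*="berea"] { }     /* value contains "berea" anywhere */

    Each of these counts the same as a class plus an element: a specificity of 0,0,1,1.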

    Note: Attribute selectors of CSS 2.1 are supported by most modern browsers, and when I say modern browsers I do not mean Internet Explorer 6.

    Initially what confused me about attribute selectors and their specificity was whether or not using class= or id= in the attribute selector made it behave like the # (ID) or . (class) of its CSS 2 predecessors. After doing some testing and reading, I found that the attribute selector has the exact same specificity as the class selector (.), regardless of whether it says id= within its little brackets. It will always have a value of 10. Thus, div[id="content"] is less specific than div#content.
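    Here’s a quick test case of my own that shows the difference:

    		/* specificity 0,1,0,1: a true ID selector */
    		div#content { background: green; }

    		/* specificity 0,0,1,1: even though this rule comes later, it loses, so the div stays green */
    		div[id="content"] { background: red; }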

    Here’s an easy way to visualize all of this (in a much-misunderstood table element):

    example                             inline (a)  ID (b)  class/attr (c)  element (d)  “quick and dirty” value
    style=""                            1           0       0               0            1000
    p                                   0           0       0               1            1
    p em                                0           0       0               2            2
    p.whatever                          0           0       1               1            11
    p[id="whatever"]                    0           0       1               1            11
    p[href="whatever"]                  0           0       1               1            11
    #whatever p                         0           1       0               1            101
    #whatever p.whatever                0           1       1               1            111
    #whatever1 #whatever2 p.whatever    0           2       1               1            211

    If you haven’t seen any specificity charts before, I hope my interpretation helped explain things further. Be sure to check out the other explanations too, because Andy Clarke has an awesome example using Sith from Star Wars. Maybe I’ll make a diagram of my own someday.

  • Social Media Sweeping Washington…Jump on the Bandwagon

    Just last week we spoke on this very website about the importance of social media for your business, whatever your business may be. In case you’ve been living under a rock, or still disagree that businesses of all shapes and sizes need to jump on the bandwagon, this latest gem of news might finally sway your opinion.

    Would you believe us if we told you the biggest “company” in the United States had bought into the frenzy of social media and social networking? Well, guess what? It did. The country’s biggest and most powerful “business,” the United States government, has officially succumbed to the social media frenzy. The White House has an official blog and official pages on Facebook, MySpace, and Twitter. Yes, you can get direct tweets from our White House.

    When asked why the government, which has previously been a notorious stickler for privacy and seemed detached from its citizens, suddenly jumped on the social media bandwagon, here’s what it had to say:

    “Technology has profoundly impacted how and where we all consume information and communicate with one another. WhiteHouse.gov is an important part of the administration’s effort to use the Internet to reach the public quickly and effectively, but it isn’t the only place.

    There’s a lot to talk about right now. From an economic crisis to wars in Iraq and Afghanistan, the President and his Administration have a full plate – not the least of which is making sure the public stays up-to-date and involved in our efforts.”

    So if you are still not convinced that social media is right for you and your business, maybe we should take a cue from Obama and our government, who believe it is a useful and vital tool for keeping citizens well informed. Don’t get left behind; stay in front with social media.

  • Web Hosting Economics 101

    These days everyone is looking to save a few bucks, but is your web hosting account a good place to start? Let’s explore the economics of web hosting a bit so you can see what you’re really paying for.

    With hosting prices as low as $5 a month, you have to ask yourself what kind of service you can really buy for that amount of money. After all, you can’t even buy a good fast-food meal for that price.

    The economics for cheap hosts are roughly the same as for high-quality hosts, who might charge as much as $75 a month for what appears to be an equivalent account. Cheap hosts still want to make a profit just like the quality hosts; they just have to do more with less, so corners need to be cut.

    Here are some examples:

    Internet Connectivity/Bandwidth – You may not think so, but there are different grades of bandwidth and different ways of delivering it. Technically a website can be hosted off a cable modem. That isn’t to say this is what cheap hosts do, but they often use the least expensive bandwidth providers, and often there is no redundancy in their networks. It’s up to 50% cheaper than doing it the right way, with tier 1 bandwidth providers and redundant routing through different providers.

    Power – Power can be one of the most expensive parts of delivering hosting or datacenter services. Leading hosting providers use redundant power circuits with battery backup and generator power; cheap hosts normally have a single feed to each server and, even when they do have battery backups and generators, the capacity often isn’t sufficient, so if there is a power outage some services still get interrupted.

    Servers – Leading hosts use high-quality, server-grade hardware from reliable manufacturers; cheap hosts often use PC-grade hardware, which doesn’t stand up well under continued high load and can fail.

    Support – Hiring good people with technology experience who know what they are doing isn’t cheap. Cheap hosts often hire entry-level people to follow predefined support scripts to save money. The number of hosting accounts per support rep is also often higher for cheap hosts, which means each rep can spend less time per customer each month.

    Performance and Site Density – Each server a hosting company purchases costs money in several ways: (1) initial hardware cost, (2) operating system cost, (3) setup cost, (4) maintenance cost, (5) power and cooling costs, and (6) rack/space costs. The more sites you can put on a single server, the better your economics get from a utilization perspective. The only trouble is that if a server is overloaded it won’t perform well, and your site may load slowly or not at all. As a general rule of thumb, you wouldn’t want any individual server to run higher than 50 or 60% utilization during normal usage. This allows sufficient headroom so that during peak utilization (such as an unexpected spike in traffic for several sites) the server can easily service the spike and overall performance won’t be affected. The economics of cheap hosting dictate a higher utilization rate, and consequently performance is often affected.
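    To put some purely hypothetical numbers on it: if a server costs $500 a month to buy, power, cool, and maintain, a $5/month host needs at least 100 accounts on that one box just to break even, while a $25/month host covers the same cost with 20 accounts and can afford to leave the headroom described above.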

    Backup – Backup software and storage are expensive. Proper backup routines create several full copies of data and often many partial copies for daily incremental backups. In most cases the cost of doing proper backups puts them completely out of the realm of what a cheap host can provide. If you are working with a cheap host, you’d better make sure you are performing your own backup routine.

    Conclusion:

    Our normal hosting plans start at $25/month. The difference between a cheap host and our quality hosting plans is $20 a month or less, so the question you should ask yourself is, “Do you feel lucky?” If you’re lucky, your cheap host will perform just fine and your business won’t be affected. If you’re not lucky, the $20 you saved just might cost your business a whole lot more in downtime and aggravation.