Sunday, 30 November 2008

Software Trial Periods: How long before customers buy?

With the November releases of SliQ Invoicing and Quoting (Standard and MC), I made a change to the format of the product and unlock codes. The idea was to simplify the process for users by making it easier to check that a product code is correct. The new format also makes unlock codes easier to generate. The new codes are longer too, which discourages people from typing them in by hand and so reduces the chance of a mistyped unlock code. On the advice of a fellow software vendor, I now embed the customer's identity - postal and email addresses - in the code, making it easier to match codes to customers in the future.
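As a rough illustration of the idea (the real SliQ code format isn't something I'll publish, and the method and field names below are invented for this sketch), an unlock code can be derived by hashing the customer's details, then formatting the hash as long, grouped hex so it is awkward to hand-type but easy to eyeball:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical sketch only - not the actual SliQ algorithm.
// Derives a long unlock code from the customer's postal and email
// addresses plus a product identifier, so the same details always
// produce the same code and codes can be matched back to customers.
public static class UnlockCodeSketch
{
    public static string Generate(string postalAddress, string email, string productId)
    {
        // Normalise and combine the identity fields before hashing.
        byte[] identity = Encoding.UTF8.GetBytes(
            postalAddress.Trim().ToUpperInvariant() + "|" +
            email.Trim().ToUpperInvariant() + "|" + productId);

        byte[] hash;
        using (SHA1 sha = SHA1.Create())
            hash = sha.ComputeHash(identity);

        // Format the first 16 bytes as hex in groups of 4 bytes -
        // long enough to discourage hand-typing, grouped so it is
        // easy to check visually.
        StringBuilder code = new StringBuilder();
        for (int i = 0; i < 16; i++)
        {
            if (i > 0 && i % 4 == 0)
                code.Append('-');
            code.Append(hash[i].ToString("X2"));
        }
        return code.ToString();
    }
}
```

Because the code is a pure function of the customer details, regenerating a lost code for a known customer is trivial, and a support person can verify which customer a code belongs to.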

I've always wondered how long people use my software before purchasing. People have up to 30 days' free use before they need to buy, but until now I've had no way of gauging how long people try before buying on average. With the change in the code format, I've been able to tell whether someone downloaded the software before or after the change. Previously, I'd read posts from other shareware authors or marketing people advising that people tend to buy more or less immediately - within hours - if they are going to buy at all; the longer people leave between trying and buying, the lower the chance of a purchase. Although not a scientific test, in the three or so weeks since the last release, 90% of purchasers have still used the old-format code. I'm taking this to mean that, at least with my products, most people take pretty much full advantage of the 30-day trial period.

Of course, I could get worried by purchasers still registering with the old product codes. With the credit crunch, I could assume that I'm not getting any new customers and am just exhausting the supply of people who downloaded a trial a month ago. However, Google Analytics is actually showing an increase in traffic over the past 3 weeks, and my download bandwidth has increased too, so I'm probably getting proportionately more new trial users. Sales haven't dropped off either, which I was half expecting for business-related software in the run-up to Christmas.

If all this means that most people take advantage of the trial period then I'm glad. I want people to use the full trial period to make sure they are happy to purchase. Hopefully it reduces the support overhead in the long term, since those people who do buy will be happier with the features the software provides.

Friday, 28 November 2008

Remote Support Access

For a while, I've been looking for a way of improving support to customers. If a customer is confused by a feature or we can't understand the problem they are trying to describe things can be difficult. The only real way to move forward in such situations is to see what the customer is actually doing on their PC. Site visits are not really possible - for cost reasons if nothing else - so I've been looking for a way of sharing PC desktops remotely over the internet.

Discussions with friends raised a number of possibilities - Webex, Windows Invite a Friend and NetViewer were mentioned. The cheapest option is Windows Invite a Friend - it comes free with Windows XP and Windows Vista. I tried it out on a pair of PCs in our office but found that:

  1. You have to explain to the client/customer how to get the service going and send an invite for support.
  2. The help pages linked from XP's help are no longer present on Microsoft's website.

Both of these points make me wary of using Invite a Friend - they wouldn't make SliQTools look professional.

So I took a look at NetViewer. This seems a reasonable service - the cost is good and the service works well. The support technician sends an invite to the customer, the customer downloads a small client program (linked from the support invite email) and grants the support person access to his PC.

To see an alternative, I took a look at LogMeIn Rescue. This turned out to be the Rolls-Royce of remote support services. It's a really good package, working more smoothly and with a more professional, friendly feel for both the technician and the customer. The only downside is the cost - 4 times that of NetViewer. Overall though, I think you get what you pay for, and LogMeIn Rescue seems like a good choice.

Wednesday, 12 November 2008

Free Directory Submission Software

It's about 3 months since I made a new release of my free directory submission tool, SliQ Submitter. Since I made the release, I've been busy on other projects. One of those projects is a faster directory submitter that should make the whole submission process much quicker - perhaps as little as 1 or 2 seconds if the directory doesn't have a captcha.

SliQ Submitter was my first attempt at writing directory submission software. Initially I made 3 releases in quick succession - the first with a free web directory list containing 450 directories, quickly followed by 2 more releases until the package listed over 2000 web directories. I initially tested submissions to all the listed directories and was confident that they all worked and would accept submissions.

Soon after the last release though, I realised that web directories don't stand still. Before long the PR of the web directories changed, with a lot dropping to PR0. Whether this caused a number to give up I don't know, but quite a few of the 2000 went offline. As the months passed, a number of the domains expired and a good percentage of the directories switched to paid listings.

In the last few days, I've rechecked the directories, removing those which are dead or have switched to being paid. Of the original 2250, there are now about 1250 left. As of today though, all of these are free and if a submitted website gets accepted by a good proportion of the 1250 directories, the site should get a good boost in PR and performance in SERPs.

Getting more Visitors and Page Views

I've been helping a friend optimise his software archive site SoftTester. The site is nearly 5 years old and has about 100,000 pages as well as being listed in DMOZ. Over the last couple of years his site had been slowly losing visitors. By June he was down to only a few hundred a day. Needless to say, his income from Adsense had fallen away to almost nothing.

In June, we decided to do some SEO on the site. We mainly concentrated on on-page SEO and improved page titles and descriptions as well as adding good h1 and h2 tags. His site is database-driven, with most of the content coming from PAD files submitted by software authors.

We changed some of the data used to display info as well as shuffling the position of some of the displayed items. Whatever we did, it seems to have paid off. Within a couple of weeks, search engines started sending more traffic to the site. In particular, traffic from Google began to grow steadily.

As well as on-page optimisation, we set about getting new links to the site. One of the main ways software download sites get links is by reviewing and making awards to listed software packages. Software authors can then use a nice award graphic on their own websites and link back to the archive. The existing graphics were a bit tired, so I encouraged my friend to buy classy new ones and before long he began to get extra links to his site.

After waiting 4 or 5 months, the number of visitors and page views had grown by a factor of nearly 5, and the income from Adsense had grown along with the traffic. Not a bad result for a few hours' work spread over a few days.

Monday, 10 November 2008

SEO’ing webpages using precise Keywords

A friend of mine has been trying to optimize his webpages. His site is an online shop selling jewellery. On each webpage, he's added a set of links to each product page. These links aid the user in navigating around the site and also attempt to improve SERPs performance as the anchor text for each link includes the keywords for each product page. For example, on one page he's trying to sell some Choker Jewellery, so he made Choker Jewellery the anchor text of the link to the page.

All the links and anchor text are chosen to reinforce the keywords used on the linked page. He's taken things one stage further and dynamically parsed the page description from the backend database and generated the anchor text for the links automatically. This will make it much easier to add product pages in the future and is a good example of using a database to make management of a website easier.
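A minimal sketch of the approach he's taken might look like the following. The class and field names are invented for illustration - the point is simply that the anchor text comes from the same database record as the page's keywords, so links and pages can never drift out of sync:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Hypothetical sketch of database-driven navigation links.
// Each product record supplies both the page URL and the keywords,
// and the keywords become the anchor text of the link.
public class ProductPage
{
    public string Url;        // e.g. "/choker-jewellery.html"
    public string Keywords;   // e.g. "Choker Jewellery"
}

public static class NavLinkBuilder
{
    public static string BuildLinks(IEnumerable<ProductPage> pages)
    {
        StringBuilder html = new StringBuilder();
        foreach (ProductPage page in pages)
        {
            // The page's own keywords become the anchor text, so each
            // link reinforces the keywords used on the linked page.
            html.AppendFormat("<a href=\"{0}\">{1}</a> ", page.Url, page.Keywords);
        }
        return html.ToString().TrimEnd();
    }
}
```

Adding a new product page then only requires a new database row - the navigation links on every other page update automatically.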

To give the links extra value he's placed the navigation near the top of each webpage on the site. This should show Google that these links are important. To make the placement of the links useful to visitors he's also added the text "Recent Searches", so the links look like phrases people have used to search for items on the site, while more importantly providing Google with an important set of links.

He wasn't sure whether to have these links at the top of the page as they do look odd. His biggest problem, however, was deciding what keywords to use for his home page; he finally settled on Cheap Jewellery. Having developed several websites in the past that attracted little traffic, he was keen to do better this time, so he used the Adwords keyword tool to find keywords with a good expected level of traffic and matched the best of them against the products on his shop site. An example is Jewelry - a spelling mistake in UK English, but a good keyword from a volume point of view with a good, i.e. low, level of keyword competition. This was a difficult process, and he found that keywords with a good expected level of traffic aren't necessarily the keywords people use when searching for things to buy from his site. The whole strategy is therefore quite risky, but definitely worth trying.

Monday, 13 October 2008

Capturing an image/ thumbnail of a webpage in C#

For a while I've been figuring out how to programmatically get an image of a web page using C# and .Net. This could have a number of uses, such as displaying a thumbnail of a web page. I found a number of methods by Googling, but on the whole they seemed a bit lengthy. Eventually, I combined bits of several methods and then simplified things by trying out an alternative approach myself. One key thing I wanted to do was create a mini picture of a website for display in a new desktop app I'm developing.

In the end I wrote a simple class with a single static method called GrabImageOfWebPage. It takes a .Net WebBrowser control instance as an argument, together with the required size for the captured image. The web page loaded in the WebBrowser control is captured (the entire client area of the control) and shrunk/enlarged into a bitmap of the required size. Here's the code:


using System;
using System.Drawing;
using System.Runtime.InteropServices;
using System.Runtime.InteropServices.ComTypes;
using System.Windows.Forms;
using mshtml;

namespace BrowserComponents
{
    /// <summary>
    /// Class providing a static method to return a bitmap of a web page rendered in
    /// a .Net WebBrowser control.
    /// </summary>
    public class CBrowserImageGrabber
    {
        [ComVisible(true), ComImport()]
        [GuidAttribute("0000010d-0000-0000-C000-000000000046")]
        [InterfaceTypeAttribute(ComInterfaceType.InterfaceIsIUnknown)]
        private interface IViewObject
        {
            [return: MarshalAs(UnmanagedType.I4)]
            [PreserveSig]
            int Draw(
                [MarshalAs(UnmanagedType.U4)] UInt32 dwDrawAspect, // tagDVASPECT
                int lindex,
                IntPtr pvAspect,
                [In] IntPtr ptd,
                IntPtr hdcTargetDev,
                IntPtr hdcDraw,
                [MarshalAs(UnmanagedType.Struct)] ref tagRECT lprcBounds,
                [MarshalAs(UnmanagedType.Struct)] ref tagRECT lprcWBounds,
                IntPtr pfnContinue,
                [MarshalAs(UnmanagedType.U4)] UInt32 dwContinue);
        }

        public static Image GrabImageOfWebPage(WebBrowser Browser, Size ImageSize)
        {
            // Get the view object of the browser.
            IViewObject VObject = Browser.Document.DomDocument as IViewObject;

            if (VObject == null)
            {
                return null;
            }

            // Construct a bitmap as big as the required image.
            Bitmap bmp = new Bitmap(ImageSize.Width, ImageSize.Height);

            // The portion of the web page to be captured, i.e. the entire
            // client area of the control.
            tagRECT SourceRect = new tagRECT();
            SourceRect.left = 0;
            SourceRect.top = 0;
            SourceRect.right = Browser.Right;
            SourceRect.bottom = Browser.Bottom;

            // The size at which to render the target image. This can be used
            // to shrink the page to a thumbnail.
            tagRECT TargetRect = new tagRECT();
            TargetRect.left = 0;
            TargetRect.top = 0;
            TargetRect.right = ImageSize.Width;
            TargetRect.bottom = ImageSize.Height;

            // Draw the web page into the bitmap.
            using (Graphics gr = Graphics.FromImage(bmp))
            {
                IntPtr hdc = gr.GetHdc();
                VObject.Draw((uint)DVASPECT.DVASPECT_CONTENT,
                    -1, IntPtr.Zero, IntPtr.Zero,
                    IntPtr.Zero, hdc, ref TargetRect, ref SourceRect,
                    IntPtr.Zero, 0);
                gr.ReleaseHdc();
            }

            // Return the bitmap.
            return bmp;
        }
    }
}
To gain visibility of the types in this example, you need the using directives shown at the top of the listing - in particular mshtml, System.Runtime.InteropServices and System.Runtime.InteropServices.ComTypes - and you also need to add a .Net reference to your project for Microsoft.mshtml.

Using the method is then pretty easy. The example code below creates a WebBrowser control and loads a webpage. When the webpage is fully loaded, it grabs an image 10% the size of the original page and displays it in a picture box.

WebBrowser mWebBrowser;

public Form1()
{
    InitializeComponent();

    mWebBrowser = new WebBrowser();
    mWebBrowser.Width = 1024;
    mWebBrowser.Height = 768;
    mWebBrowser.ScrollBarsEnabled = false;

    mWebBrowser.DocumentCompleted +=
        new WebBrowserDocumentCompletedEventHandler(mWebBrowser_DocumentCompleted);
    mWebBrowser.Navigate(@"http://www.software-product-development.blogspot.com");
}

void mWebBrowser_DocumentCompleted(object sender,
    WebBrowserDocumentCompletedEventArgs e)
{
    if (mWebBrowser.ReadyState == WebBrowserReadyState.Complete)
    {
        Image Img = BrowserComponents.CBrowserImageGrabber.
            GrabImageOfWebPage(mWebBrowser, new Size(102, 77));

        if (Img != null)
        {
            pictureBox1.Image = Img;
        }
    }
}

Monday, 6 October 2008

Losing a PageRank Value

In the mini-update toolbar export on Sept 26th this blog lost its PR, i.e. the value went to N/A. Previously the PR had been 3. Why this has happened I don't know. I haven't linked to any silly sites or reduced the frequency of posting.

Today I also noticed that a few inner pages on the blog have PR. I haven't noticed this before. It's strange that the older posts have PR but the blog home page is back to PR N/A. Google Analytics isn't showing any change in the level of traffic to the blog.

Friday, 12 September 2008

How to keep on the good side of Google

When people are trying to improve their ranking in search engine results pages, many will follow any advice they can find in the struggle to improve rankings. People need to be wary about which advice they follow, though, as some advice will cause you to be penalised by Google.

If a page on your website is penalised, it will not perform as well as it might in search results; in the case of a new website, it may never get to perform well in the first place. Here are some tips to help avoid penalties.

Remember that outside Google, no-one really knows what counts as good or bad in terms of SEO. There is some general advice from Google about having good, unique content and quality backlinks. Other than that, people are just using their experience and guesswork to find out what works and what doesn't. A lot of SEO information online is copied and spread as online myth.

With this proviso in mind, here's a fairly non-controversial list of things to avoid.

  • Avoid Exchanging Links
    Excessive link exchanging should be avoided as Google may see this as an attempt to artificially improve rank. A few link exchanges will be OK but avoid large numbers. Link farms - where a large group of sites hyperlink to all other sites in the group - should always be avoided.
  • Do not Sell links
    Selling links is a no-no - unless the hrefs use the nofollow attribute. If your site sells dofollow links and Google becomes aware it may well be penalised. Google's WebMasters site allows people to report paid links. Rumour has it that Google may use this information to adjust its algorithms to improve detection of paid links.
  • Do not buy links
    Recently, Google has threatened to penalise sites they discover have purchased dofollow links from another site. Thinking about it logically though, this does not seem possible - or at least it would be extremely unfair! If this were the case it would be easy to penalise a competitor by purchasing links to their site and then reporting them to Google.
  • Avoid duplicate content
    If possible, avoid duplicate content. For example, don't make the same post to two different blogs. Google will ignore copies of content - a few copies may make it into the index, but widespread duplicates will be dropped. Even on different pages within a website, try to keep the textual content unique and avoid repeating whole paragraphs of text.
  • Don't stuff keywords
    If you want to perform well for a certain keyword, stuffing your webpages full of the keyword will not help. Write your copy in a natural way so that it reads well. If you are writing a web page about Google penalties (for example), the keyword "Google Penalty" will naturally appear a number of times - you don't need to repeat the phrase scores or hundreds of times.
  • Don't include hidden text
    Make the contents of the webpage visible to the user. For example, don't include extra content such as white text on a white background that the user cannot see.

Sunday, 7 September 2008

Sitelinks update

Today, Google Webmasters shows an update to my sitelinks, but this doesn't seem to have made it to the SERPs yet. Three new links have been added, to my sales, support and SliQ Submitter pages. When these links make it to the search results, the sitelinks will look much better than at present.

Sunday, 31 August 2008

Further thoughts on Google Sitelinks

Thinking about it a bit more, my feeling is that Google needed knowledge of where people went after landing on the SliQTools homepage in order to choose the sitelinks. The only way Google could have this knowledge is by using Google Analytics data. I wonder if sitelinks have ever been given to a site that did not use Analytics?

One of the search terms for which sitelinks are displayed is used by people revisiting the site to see if there is a new release. After landing on the homepage, they go to the Release History Page.

I did a rejig of the homepage a couple of weeks ago and shuffled around some of the thumbnails to send people to a better page after the homepage, i.e. the Invoice Software page instead of the Invoices & Payments page. I think the Invoices & Payments sitelink was generated before this reshuffle. Perhaps after 90 days the sitelinks will be regenerated in line with the new site linking structure.