Month: December 2011

Rebuild eStore on WordPress in half-an-hour

In my current mood, I do not recommend the eStore plugin, but if you must, pay your money and do this. I had to rebuild my store. Now that I know how, it should take half an hour. It took me over 9 hours, with back and forth to the vendors. Here are some tips to help you understand what is involved.

So calm down, read carefully, and work carefully. Thirty minutes should be ample, especially with some preparation done long before you get into trouble. So I will write as if you are building the eStore from the very beginning.

#1 License and commercial details

• Keep two pieces of paper – the email with the download link and the PayPal transaction.

• Keep the email address you used and the PayPal transaction number – they work as your permanent license. You need them to communicate with eStore.

• I have stored them in various places: in my email, on the CD where a backup of the plugin is stored, and in my diary in case I am away from my desk when it crashes – which it will, it seems.

#2 Help and forum

Now sign up to the forum and change your password. Again you don’t want to do this when your shop is collapsing and you are under pressure.

#3 Supercache – not

Then download the plugin and use it. Don’t use Supercache, even if your hosting service says to, and even though eStore’s documentation says things like “if you are using Supercache”. What they mean is “please never use Supercache”.

If you are in Europe, don’t use Supercache. It stops people changing their details in your cart and shows the details to other customers. Without stopping to worry too much about it, it seems that this breaches Privacy laws horribly.

And don’t just delete Supercache if you find you have it running with eStore, because that will break your WordPress dashboard.

You must deactivate, if not delete, eStore; take down Supercache; and then rebuild eStore. Horrible, huh?

#4 Rebuild eStore

Rebuilding eStore is much easier than eStore makes out. This is what you need to know.

Your WordPress site has two parts: the code is loaded up in one part, which you can see using FTP, and the content of your posts is loaded into a MySQL database, which is accessed through phpMyAdmin.

When you back up your WordPress site, only the MySQL database is getting backed up. If you want to restore your database, you still need a skeleton WordPress site to house it. You can always rebuild a WordPress site from scratch and then put back in themes, modifications and plugins. So remember to back up any themes that you have bought and any child theme you have written, and to list the plugins you use and any licenses that you have, like the one for your spam catcher. This is not stuff to leave till tomorrow. Always do it immediately.

So let’s assume you do have your MySQL backed up, and you do have the modifications to your WordPress theme backed up and notes of what is where on your website.

To rebuild eStore, you also need a good copy of their code. If you bought it recently, you have one. If it is a few months old, get the commercial details together and go to their forum to hunt, and I mean hunt, for a link for automatic updates. Use the commercial details to get updated copies of the plugin.

Save the up-to-date plugins somewhere on your C drive (remembering to back them up on CD later). Now what you are going to do is wipe out the offending plugins and write back the code for the plugins.

Use FTP or FileZilla to look at the WordPress PHP for your website and track to wp-content/plugins. Delete the wp-cart-for-digital-products folder and any offending caches. You can do that because only the code hangs in those folders. The details of your shop have been stored in your MySQL database (which is backed up anyway, right?).

Now you can transfer the new plugin from your hard drive to the folder where the old eStore hung out. And all should be good.

The key is to be clear where everything lives and that the details of your shop are in MySQL and the code for the plugin is in what you see in FTP (your theme is also there). The only shop assets you can see in FTP are under wp-content/uploads. If you are selling digital goods, that’s where they are. All the tetchy little details of the shop and who bought what are in the MySQL database with your posts, comments, users etc.

I hope this helps. Rebuilding eStore should take about half-an-hour. It took me 9 hours. It needn’t.


Coding in schools? Take the splinter from our eye perhaps?

As I wait for FTP to download a website from a server onto my laptop, I thought I would write a bit.

I was up late last night, probably unwisely, as I tried to fix odd errors on my online shop. One error led to another.  I left a message for my US suppliers, went to bed, and got up in the morning to the usual geek-like rude reply: read this link.

Well, my response is:

a) Why wasn’t that pointed out to me earlier?

b) When I put that advice in plain words, your product does not work under the conditions you said it would work. You did warn us by saying “if you use . . .”.  But the truth is that you should have said “We do not recommend using . . . If you do use our product with this other product, here are the 5 things you must get right.”

Anyway, I deleted the 2nd product and guess what – their product now misses the 2nd product and has frozen my online shop.  Hence downloading a copy of what is left onto another computer for safekeeping before I fiddle any further.

So why is this important?

  1.  Don’t do your computing when you are tired and don’t try to read Geek-English when you are tired and stressed.  They do their computing and writing when they are tired and stressed and it shows.
  2. There is a big debate going on in the UK about teaching coding in schools.  I scoff at this debate. It began with Eric Schmidt of Google teasing the UK government about teaching word processing in schools (i.e., using Microsoft).  The geeks of the UK have fallen for this line and now think we should teach the average teenager how to write the next package. Hmm . . . it will be as bad as the one that I am using, and why anyway should we trade in Microsoft for Google?  For all our frustrations with Office, it is much more stable than any Google product.
  3. But yes, we should all use computers a lot more. I bought some software because I thought my predilection for writing code from scratch was ill-served.  Get working ecommerce software and use it! Bad idea. Bad idea to rely on geeks. Much better to know every corner of your code yourself.

But of course we cannot know everything.  But yes.  Using other people’s code is like signing a document without reading it. We do it – often.  We shouldn’t.  We should streamline our lives to have two boxes:

# Box 1: Not very important to me

Things in this box are not important to me.  So I can afford to sign bits of paper or use other people’s code or eat food of unknown origin or sleep with someone who seems to sleep elsewhere too – you get my drift.

If it doesn’t matter, put it in this box.

# Box 2: Very important to me

In this box are the things I care about.  So I should tend to them carefully and learn about them deeply.

As I can’t do everything, I should be very selective about what goes in this box.

I have to be careful, too, about leaving things out which do impinge upon me or would enrich me enormously.  So what is important must go in, and what goes in must be looked after.

What kids should learn at school

That’s what kids should learn at school. To do their work well.  Not to spend time on things they don’t care about.  And not to complain if things they ignored turned out to be important.

Of course, when they are small, they can’t understand this completely or understand enough about anything.  So we grow their world for them slowly, helping them to push back their horizons, bit by bit, as they can absorb more and attend to it with the same care as things already in their world.

To live in a narrow world is not grown up. We might even argue that it is to be ‘not of sound mind’.  But to suggest we should code at school . . . that’s as half-baked as the code I stupidly bought.

Teachers know a bit about helping kids to grow

Kids must go to school and expand slowly from the world they are in to a bigger world. Teachers have some idea of the average pace that kids can work at.  And they know quite a bit about managing an environment where kids can grow steadily and safely.

How can we help schools?

If we think there is something in our world that teachers might like to see, then I think we should invite them in.

Hold bar camps for teachers to have a lovely relaxing weekend in their hols with good food and pizza and geeks with blazing eyes excited by their weekend challenge.

We can accept problems teachers identify with software and work on some improvements.

A splinter out of our own eye?

But it is not kids who need fixing. It is not schools that need fixing.

It is the geeky world of very bad software and very rude help desks. N’est-ce pas? And TG for Google Translate so I could check my spelling.

Getting with the program

Oh, btw, did you see that the National University of Singapore have built an iPhone app that translates Mandarin speech into spoken English? . . . It might make me lazy about learning Mandarin.

Anyway, Stanford watch out.  Asia is hot on your vapour trail.

And where is the UK . . . making key apps? . . . making life better? . . . Yes, Eric Schmidt is right.  We are a nation of chatterers, preferring to use a word processor rather than to build one.

We should build the companies and businesses that take on NUS and IIT – kids will work that out fast enough and set that as their new horizon.  Something for kids to look forward to . . . they don’t need to aspire to aping US entrepreneurs from 20 or 40 years ago.

And if you don’t know what NUS or IIT stands for, of course Stanford’s former post-grad students can solve the puzzle for you.

So ends my self-entertainment – but my FTP download is still not done.  Is it stuck in a loop?  Dearie me. Do I have to understand its code too?  Well, don’t be taken in by geeks.  The first thing I learned as a CS student is that code is arbitrary.  The problem is usually the comma you didn’t know was supposed to be there.  Now to search Google for ‘looping FTP’. Logic will not fix this. Nor common sense. Someone has seen it before – or not.


Down-to-earth principal components analysis in 5 steps

This post is a step-by-step, practical, guide to principal components analysis.  It’s very hands-on and “common sensical”.  If any experts out there spot an egregious error that would horribly mislead a beginner, please do let me know.

I’ll simply work through 4 steps and then sum up as 5 steps in a slightly different order.

#1 Data

I always like to start any data problem by thinking about my data rather concretely.

In a PCA problem, imagine a spreadsheet which has hundreds or even thousands of columns – far too many for comfort.

On each row of the spreadsheet is a case, or ‘training example’ in machine learning parlance.

What we want to do is to find the columns that matter.  Alternatively, we ask “which columns could we bundle together into computed columns so that we have a more manageable number?”

In short, this procedure will tell us which columns we can leave out and which ones we should bundle together.  The bundles will be the principal components.  And the final answer will tell us how much precision we have lost by bundling scores rather than using the original raw data.

So, we begin with data which is laid out in matrix X with m rows and n columns (m x n).

#2 Normalize the data

The data comes in all sizes – little numbers and big numbers, very spread out and bunched together.  First we smooth the data in the same way that tests are normed at college.  Simply, we convert each column to a mean of zero and a standard deviation of one.

To be very clear how this works, we take each cell and adjust the number in the cell depending on the other numbers in the column. If we were working with spreadsheets, we would open another spreadsheet with exactly the same number of rows and columns and add this formula to each cell. So for cell A1 of the new sheet we would have:

= (Sheet1!A1 - AVERAGE(Sheet1!A:A)) / STDEV(Sheet1!A:A)

When we calculate the mean and stddev of the columns in the new spreadsheet, they will all be 0 and 1 respectively.
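If you prefer code to a spreadsheet, here is a minimal Octave sketch of the same normalization. The variable names are mine: X is the raw data matrix and X_norm is the normalized copy; the one-line covariance formula later in this post writes the normalized matrix simply as X.

% Normalize every column of the m x n data matrix X to mean 0 and SD 1
[m, n] = size(X);
mu = mean(X);        % 1 x n row vector of column means
sigma = std(X);      % 1 x n row vector of column standard deviations
X_norm = (X - repmat(mu, m, 1)) ./ repmat(sigma, m, 1);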

#3 Principal Component Analysis

Now we find the ‘bundles’ of columns.

In my old days of statistics packages, the program would return a table which listed all the columns down the page and then produced factor loadings or weights for a whole heap of factors (or bundles) in more columns.  The number and the sign would tell you the weight of the original data column in the new ‘bundle’.  The weights varied from -1 through 0 to +1.

In Octave, the free version of Matlab, there is a facility to do PCA in two steps:

Step #3 Part One

  • Compute what is called the covariance matrix.  Simply imagine taking a copy of the spreadsheet (the second one), multiplying column A by column A cell by cell (A1 × A1, A2 × A2 . . .) and taking the sum of those squares as the first entry in a new row; then column A by column B (A1 × B1, A2 × B2 . . .) and taking the sum of the products as the second entry; then column A by column C . . . etc., until we have a new row with N entries, each got by multiplying two columns cell by cell and adding up the products. You’ll have to try it yourself.  I’ll have to get out pen and paper when I read this a year from now.
  • Then we do the same starting with Col B and Col A (that’s a repeat, I know . . . stick it in), B to B, B to C, etc.
  • Until we have a new matrix with N columns and N rows.  (Yes – this is what computers are for.)
  • And one more sub-step – divide every cell by the original number of cases or training examples (i.e., rows in the very first spreadsheet).

That’s the covariance matrix.  In Octave, which uses linear algebra, it is much easier.  You just tell the machine to multiply the transpose of the normalized data by the normalized data and divide by m – one line of code.

CovarianceMatrix = (X’ * X )/m

(That’s what computers are for!.. the explanation was just so you have a concrete idea of where the data came from and what happened to it).

Step #3 Part Two

The second step in PCA is to find the bundles using a function that is built into Octave called the ‘singular value decomposition’ or SVD.

All you do is ask for it and it ‘returns’ three matrices, U, S and V and we are going to use U and S.
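In Octave, assuming the covariance matrix from Part One is sitting in a variable called CovarianceMatrix, the whole of this step is something like:

[U, S, V] = svd(CovarianceMatrix);   % U, S and V are each n x n; we use U and S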

U gives us a matrix exactly the same size as the covariance matrix.  Each column now refers to a ‘bundle’. The rows still refer, as before, to the features (that is, the original columns in the data matrix and the normalized data matrix). Have a quick check.  I’ll wait!

Note we have as many bundles as we had columns right at the start, but we don’t want to use all the bundles (columns in the U matrix), otherwise we will have exactly the same number of columns as when we started – no point, hey?

So we will only use as many, starting from the left, as we need.  We will decide how many to use on the basis of the S matrix.

Let’s do that later and  consider further what U actually tells us.  As we read down column one, cell A1 tells us the loading of original column A, or the 1st feature, on the new bundle 1.  Cell A2 tells us the loading of original column B or the 2nd feature, on new bundle 1. And so on.  Got it?  Try it out.

So what can we do with this?  Two things –

  • We can see which of our original columns were the most important.  They are the ones with the biggest numbers in the column on the left and in subsequent columns as you move right.  A positive number means the higher the original number, the higher would be the bundle score. A negative number in this new table means the higher the number in the original table, the lower would be the bundle score.  Good to know if two of our original columns pull in opposite directions. So that is the first use – to understand the original columns and how they hang together.
  • The second use is to create a simplified data set.  OK, we hate it when bureaucrats create scores for us – like a credit rating. But imagine the rows are just pictures and the columns are the pixels or colors at 10 000 points on a page – collapsing 10 000 columns into 1000 columns or 100 columns could be handy for data compression.  (We’ll work out later how much data is lost – or how much blur is added!)  So how do we recreate the scores?  We will come back to this – let’s stick with understanding what those numbers in the U matrix mean. All we have to do to get a score for the first bundle is take the number in the U matrix for original column A (now in row 1) and multiply it by the score for the first case in column A (do it on a bit of paper).  Do that for the whole row for the case times the whole column in U (row of the normalized data times column in the U matrix), add it up, and we get a ‘bundle’ score for the first case.  That will go in cell A1 in a new table. The cell is the score for case 1 on bundle 1.  (See the Octave sketch after this list.)
  • Then we can do the same for the second case, then the third.  And we will make a new column of bundled scores for the whole data set.
  • Then we do the same for the second bundle (it’s OK – that’s what computers are for).
  • Finally we have a matrix with as many rows as we have cases or training examples and as many columns as we have new bundles.  This can be useful when we need compressed data and we don’t mind a bit of blur in the data.
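As a minimal Octave sketch of that projection (variable names are mine): X_norm is the normalized data from step #2, U comes from the SVD above, and the value of k – how many bundles to keep – is decided in step #4 below, so the 100 here is only a placeholder.

k = 100;                    % placeholder; choose k as in step #4
U_reduce = U(:, 1:k);       % first k columns ('bundles') of U
Z = X_norm * U_reduce;      % m x k matrix of bundle scores, one row per case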

#4 How many bundles do we need?

So now we come back to the question: how many bundles do we need?  Well, firstly, a lot fewer than the number of columns that we started with. That’s the whole idea of this exercise – to get that original spreadsheet a lot, lot smaller.

I mentioned before that we use the data from the second matrix, S, that is churned out by the SVD function in Octave, to work out how many bundles to keep.

This S matrix is also exactly the same size as the covariance matrix which was square with the same number of rows and columns as we had columns in the first, first, first data table.

This time, though, we only have data in the diagonal from top left to bottom right.  Every other cell is zero.  So that means there is a number in row 1 and column A; row 2 and column B; etc.  Gee, couldn’t we just have a column?  Yes, we could. It’s laid out this way because of the way machines do arithmetic. It is easier for the machine to pull out the matching diagonal from the U matrix, for example.  But that’s not our problem right now.  We just want to know how to use these numbers to work out how many bundles to keep.

Well, these numbers represent how much variance is explained by each bundle.  The very first number (top left) tells us how much of the variance in the whole original data set is explained by the first bundle.  To work out what % of variance is accounted for by this bundle, we take all the numbers on the diagonal and add them up to give us a number representing all the variance in the whole data set.  Then we take the number for the first bundle (top left) and work it out as a percentage of the whole lot. If the percentage is less than 99% (.99), then we add another bundle (well, we add the percentage for another bundle – or, equivalently, we add the numbers for the two bundles and divide by the sum of all the numbers).  We just keep going until we have enough bundles to account for 99% of the original variance.  (So in plain terms, we have allowed for 1% of blurring.)

Oddly, only 1% of blurring might allow us to lose a lot of columns.  Or, more precisely, when we compute new scores, one for each bundle in the final solution, we might have far fewer bundles than the original number of columns but still account for 99% of the original amount of detail.
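As an Octave sketch, that 99% rule looks something like this, using the S matrix returned by svd (again, the variable names are just for illustration):

s = diag(S);                               % pull the diagonal out as a column vector
variance_retained = cumsum(s) / sum(s);    % running share of the total variance
k = find(variance_retained >= 0.99, 1);    % smallest number of bundles reaching 99%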

 That’s it…that’s PCA.

#1 Get your data in a spread sheet (cases on rows, features in columns)

#2 Normalize the data so each column has a mean of 0 and an SD of 1

#3 Use a built-in function to return a matrix of eigenvectors (U) and variance (S)

#4 Decide how many ‘bundles’ of features to keep in (account for 99% of variance)

#5 Compute new scores – one score for each case for each bundle (now the columns)

And what do we do with this?

#1 We understand how columns hang together and what we can drop and only lose 1% of detail (or add 1% of blur)

#2 We can use the new scores to do other things like feed them into a prediction program or supervised learning program. The advantage is not to improve on prediction, btw,  but to cut down on computing costs.

That’s all!
