Hosting Flask on Webfaction

I host quite a few websites through Webfaction, which lets you experiment with Python, PHP, nodejs and other programming languages for a fixed monthly price.

Considering everything you can (are allowed to) do with your account, it is really a bargain.

Recently I had to upload a Flask application I had written. The configuration on the server turned out to be rather different from my localhost, where I had thrown everything together and used the built-in Flask server for quick testing.

But on the server a real front-end web server sits in front of it, Nginx or Apache, because the debug Flask server is really meant for testing, not for production environments.

That web server naturally has a different configuration, which is why you actually have to import the app you wrote from within index.py.

Some links that helped me along:

  • Deploying A Flask App on Webfaction: fairly complete and logically structured, although I didn't use a virtual environment setup, and I also didn't have to add a link to my static folder in the Apache config (with it, things simply stopped working). I had some trouble understanding the commands in point six of his list, but that's because I don't use module imports much and so haven't fully mastered them – a bit more now 🙂
  • An older but similar way of setting things up, more complete but less transparent, can be found on the Webfaction community forum: Installing Flask on Webfaction.

What I found hardest was importing the app. From module x (where x is actually the folder containing your __init__.py) you import the app that you define in that same __init__.py. I cursed at that for quite a while before it worked.
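The pattern in code, as a minimal sketch – the package name myapp and the fake app object are made up here so the import mechanics can be shown without Flask installed (in the real __init__.py it would be `app = Flask(__name__)`):

```python
# Layout being simulated:
#
#   myapp/
#       __init__.py   -> defines `app` (in Flask: app = Flask(__name__))
#   index.py          -> the file Apache loads, containing only:
#                          from myapp import app as application
#
# Build that layout in a temp dir and do the import, to show the mechanics:
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "myapp")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write('app = "my flask app object"\n')

sys.path.insert(0, root)  # like pointing the web server at your project dir
from myapp import app as application  # what index.py boils down to

print(application)  # -> my flask app object
```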

Playing around with a photo of Kaai16

What does it say about you when, instead of playing Watch_Dogs as you had planned, you spend two or three hours on a single photo, plus some Vegas javascript, only to suddenly realise that you have actually built a responsive mini-website?

Driven? Crazy? Too much of a good thing?

In any case, I had a lot of fun tinkering with it:

test site

You can go and explore it yourself (for a little while) at http://demo.eventconnect.be

New development finished: an online business card for the Henri Goossens cardboard factory


I built a new website for the Henri Goossens cardboard factory. The site serves as their business card on the internet, so that customers looking for them can locate and contact them more quickly.

This html5 site is fully responsive, and therefore also displays correctly in any mobile browser.

The design relies mainly on rounded corners via CSS3, which all current browsers can render. IE7 still shows this site with square corners.

Customers can contact them either via the email link (for those who have a local email client installed) or via the contact form, where they can leave a message through a modal dialog box that pops up.

Built with bootstrap, jquery, and php for processing the contact form.

FR and NL pages are provided.

Cartonnerie.be points to the same hosting account, but serves the French-language content via htaccess rewrite rules.
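I won't paste the real rules, but the idea looks roughly like this sketch – assuming both domains share one document root and the French pages live under /fr/ (the paths and domain pattern here are illustrative, not the actual configuration):

```apache
# .htaccess sketch: when the request comes in via cartonnerie.be,
# transparently serve the French content from the /fr/ subfolder.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?cartonnerie\.be$ [NC]
RewriteCond %{REQUEST_URI} !^/fr/
RewriteRule ^(.*)$ /fr/$1 [L]
```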

Twitalytics Update, December 2010 (1)

First of December's overview of tweets about #LeWeb

Some more updates and changes for Twitalytics:

In general:

  • I made some more styling changes in the CSS; hopefully you'll find them pleasing, or at least ok.

Keyword Page:

  • The “Today” button on the keyword page did not work correctly – this is now fixed.
  • Instead of the “All” button, which lost its usefulness some time ago when keywords started having more than a few hundred tweets, I've added a “Yesterday” button. This means you can go back into the database day by day. I'm wondering if I need to add a graph for each day's activity (I think yes).
  • The layout of the tweets in the table below the graphs has been rearranged: before you could only sort the tweets by language, now you can also sort by author name.
  • Behind the author name there is sometimes a (+). This means that there is information about this author in the database, and clicking on the author name will show you a dialog box with info on the user and the keywords he is tweeting about (if more than one). There's also a link to his twitter profile. If there is no (+), you are linked directly to the online twitter profile.
  • Tweets are shown chronologically by default – the date has been moved to the far right column, in a small font so you can still verify this.
  • The tweets are now presented with a bit more whitespace around them.
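Going back day by day, as the “Yesterday” button does, comes down to computing date boundaries for the query; a small sketch (the function name is made up, and the resulting window would feed a `created_at >= start AND created_at < end` filter):

```python
from datetime import date, datetime, time, timedelta

def day_window(days_back):
    """Return the [start, end) datetimes for the day `days_back` days ago:
    0 = today, 1 = yesterday, and so on."""
    day = date.today() - timedelta(days=days_back)
    start = datetime.combine(day, time.min)  # midnight at the start of the day
    return start, start + timedelta(days=1)

start, end = day_window(1)  # yesterday's window
print(end - start)          # -> 1 day, 0:00:00
```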

Adding A New Keyword:

  • When you add a new keyword and it doesn't already exist, a first update is made immediately for that keyword – you no longer have to wait until the hour strikes before seeing results!
  • todo: rework the results page shown when searching for a keyword: it is still my very first attempt at a page, and it looks hideous to me now.

My Profile:

  • This now only contains your profile information.
  • To be added: a way to update your profile information with a new email address, etc. (this might take a while!)

Reports:

  • This now contains the statistics about the keywords and the users tweeting about them.
  • It shows keyword + language + number of users tweeting about this keyword.

In the backend I've also started using the mailer python module, a wrapper that lets you send mail more easily – instead of trying to suss out how smtplib expects its parameters for each type of mail, I can just hand them to mailer and it does the rest! This'll open up some new possibilities for alerting in the future.
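For the curious, this is roughly the standard-library plumbing that a wrapper like mailer saves you from writing by hand – a sketch only, with placeholder addresses and host, and `build_alert`/`send_alert` being names I made up:

```python
import smtplib
from email.message import EmailMessage

def build_alert(subject, body, sender, recipient):
    # The part a wrapper hides: assembling a proper MIME message by hand.
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

def send_alert(msg, host="localhost"):
    # ...and the smtplib incantation itself (not called in this sketch).
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg)

msg = build_alert("Twitalytics alert", "Threshold reached for #leweb",
                  "alerts@example.com", "me@example.com")
print(msg["Subject"])  # -> Twitalytics alert
```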

For example, an e-mail alert when the average threshold has been reached or exceeded on a certain day would be interesting!

Twitalytics Update, November 2010 (2)

I just did an update to Twitalytics (or Twita for short).

What’s changed:

  • Redid the search screen and added autocomplete (jquery-ui) to the input field: when you type something in the search field, the drop-down shows you the list of similar keyword queries other users are running.
  • Rearranged the search screen so it's more intuitive to select languages.
  • A first relook at the results shown after entering the search words: ugh! I need to redo this screen! I've already tweaked the buttons.
  • Added some spiffy graphic icons to the keyword overview table! The icon set is called ‘flavour-extended’ by Olivier Twardowski, and I found it via a post in Smashing Magazine.
  • I tweaked the header and the footer section so that they take up less space.
  • The keyword table which currently resides under “My Profile” now also shows the languages you selected.

I'm starting to notice some slowness in the system – I'll need to optimize my queries. I'm thinking about moving some of the tweep-related ones (twitterers, users who send tweets) to another cron job so they run once a day.

That's it for now!

Now per Keyword: Today Chart (and taking into account your languages)

Twitalytics has been updated some more:

  • For each keyword you now have a “Today” chart that shows you today's tweets (normalised per hour).
  • The “Today” chart only shows the tweets per hour for your language selection (the 60-day chart counts all tweets in all languages).
  • On your “My Profile” page you can see your list of keywords with the number of tweets, and now also the number of users per keyword.
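The per-hour bucketing with a language filter behind the “Today” chart comes down to something like this sketch (the function name and the `(created_at, lang)` tuple shape are illustrative, not my actual schema):

```python
from collections import Counter
from datetime import datetime

def tweets_per_hour(tweets, languages):
    """Count tweets per hour of the day (0-23), keeping only the
    languages the user selected. `tweets` are (created_at, lang) pairs."""
    counts = Counter()
    for created_at, lang in tweets:
        if lang in languages:
            counts[created_at.hour] += 1
    # One bucket per hour, zero-filled, ready to feed to the chart.
    return [counts.get(h, 0) for h in range(24)]

sample = [
    (datetime(2010, 12, 1, 9, 15), "en"),
    (datetime(2010, 12, 1, 9, 40), "fr"),   # filtered out below
    (datetime(2010, 12, 1, 10, 5), "en"),
]
print(tweets_per_hour(sample, {"en"}))  # hour 9 -> 1, hour 10 -> 1
```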

I expect that the list of keywords/tweets/users will move to the “Reports section” in the future, with additional graphs (pie charts, hmmm !).

Twitter Status IDs to change to string representation

I almost missed this post on the new status id generator for twitter:

Timeline:

  • By 22nd October 2010 (Friday): string versions of ID numbers will start appearing in the API responses
  • 4th November 2010 (Thursday): Snowflake will be turned on, but at ~41-bit length
  • 26th November 2010 (Friday): status IDs will break 53 bits in length and cease being usable as integers in Javascript-based languages

I just checked my tables – phew! At first glance, I already store my status IDs as text… 🙂

Plus of course I’m using Python, not Javascript… but it never hurts to check.
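The 53-bit limit is easy to demonstrate: Javascript numbers are IEEE 754 doubles, and Python's float is the same type, so the precision loss can be shown right here:

```python
# IEEE 754 doubles (what Javascript uses for all numbers) have a 53-bit
# mantissa, so integers above 2**53 can no longer be represented exactly:
big_id = 2**53
assert float(big_id) == float(big_id + 1)  # two different IDs collide!

# Kept as strings (or as real integers, which Python has), they stay distinct:
assert str(big_id) != str(big_id + 1)
print(big_id, big_id + 1)  # distinct as ints, identical as doubles
```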

Trendlines for JQPlot are working!

At last I've got trendlines working! And wouldn't you just know it, it only needed a small change – but the documentation for jqplot seems a bit outdated.

Mind you, I completely understand – as far as I can tell, this is just one programmer creating and maintaining it. His project is quite a lot bigger than mine, and I already have trouble following up on and dealing with all the bugs my users find in a timely manner, let alone implementing the (welcome!) suggestions on how to improve the site.

Anyway, for the record: to get trendlines to work in jqplot, you need to:

  • include the trendline plugin in your code
  • set the line $.jqplot.config.enablePlugins = true; somewhere in your javascript

And that's it! Any further configuration can be done inside the series section of the config.

Changing from feedparser.py to urllib & simplejson

With all the recent Twitter outages, my back-end system that retrieves information from Twitter was going haywire. Things kept going wrong, tweets were not retrieved, the works.

I initially coded this backend using feedparser, thinking that the code could be reused later to fetch RSS feeds from other sites. That was a mistake – I made the decision assuming that information in atom format and in json format would be similar, but that was not correct. I am *not* saying that feedparser is not good; it's just not the right tool for the job it has to do!

The atom format that Twitter returns (at the current date, of course – this might very well be fixed later on) is really a hodgepodge of information prodded and shaped into atom. Lots of info is repeated, because atom was really made for longer articles of text that need a title, an intro, a body, etc.

All this means a much larger response size – certainly not enormous, but in the long run it adds up in data traffic.

Not all info that you get in json is correct in atom either: iso_language_code, which indicates the language the author primarily uses, was/is always set to en-US in the atom format.

So after all those outages, and after checking and finding that most json queries still returned correct results, I removed the feedparser lines and am now using urllib and simplejson to retrieve and parse the Twitter data. It took me about three late evenings in a row to work through (I have a full-time day job, so I only have time for this on the train and in the evenings after 9 pm), but it's running (almost) smoothly now.
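The parsing side of that switch, as a sketch in today's Python 3 spelling – the payload below is a made-up miniature in the shape the search API returned, the retrieval call is left as a comment so the sketch runs offline, and I use the stdlib json module here where the post uses simplejson (their APIs match):

```python
import json
from urllib.parse import urlencode

params = urlencode({"q": "#leweb", "rpp": 100})
# Retrieval (not run here) was essentially just:
#   from urllib.request import urlopen
#   raw = urlopen("http://search.twitter.com/search.json?" + params).read()

# Made-up miniature payload standing in for the API response:
raw = '''{"results": [
    {"id_str": "9007199254740993", "from_user": "someone",
     "iso_language_code": "en", "text": "Off to #leweb!"}
]}'''

data = json.loads(raw)
tweets = [(t["id_str"], t["from_user"], t["iso_language_code"], t["text"])
          for t in data["results"]]
print(tweets[0][0])  # -> 9007199254740993  (kept as text)
```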

I still need to weed out a bug in my code, though – the last search does not seem to be processed… grrr.