Friday, August 16, 2013
Viewing Markdown files in Firefox on Linux
There are Firefox plugins for viewing Markdown files - but neither of the big ones - Markdown Editor and Markdown Viewer - worked at all on my Linux distro (Mint 15), even with the mimetypes.rdf workaround.
There are of course editors like UberWriter that will do some special formatting of MD files, but none I have found really track well to the actual HTML produced by the browser plugins. Since many of my colleagues use Firefox on Windows where the plugins work, I wanted to know what they would look like.
In desperation I hacked a quick and dirty solution using a Python translator, markdown2.
- first install pip
sudo apt-get install python-pip
- install markdown2
pip install markdown2
- for me this had already run 'python setup.py install' - if it doesn't for you then run it manually
- test at command line - should be available
markdown2
- now try all together - assumes firefox is on your path
markdown2 README.MD > README.html; firefox README.html
- add as a shell alias - modify ~/.bashrc and add
function mark() { markdown2 "$1" > "$1.html"; firefox "$1.html"; }
- now try
source ~/.bashrc
mark README.MD
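If you'd rather not end up with double extensions like README.MD.html, here is a variant sketch that swaps the extension instead - same assumptions as above (markdown2 and firefox on your path):
function mark() { local out="${1%.*}.html"; markdown2 "$1" > "$out"; firefox "$out"; }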
Tuesday, June 25, 2013
PKI Security Cheat Sheet
This is a work in progress.
----------------------------------------------------------------------
Using OpenSSL - most common activities
----------------------------------------------------------------------
Generally used for X509 artifacts, i.e. the more open standard.
Dump X509 certificate (CRT) content - assumes PEM format
openssl x509 -in certificate.crt -text -noout
Dump X509 certificate (CRT) content - specify input format, PEM/DER
openssl x509 -inform DER -in site.crt
NB: Try changing the format on error: "Expecting: TRUSTED CERTIFICATE"
Dump a pkcs12 user identity certificate
openssl pkcs12 -info -in keyStore.p12
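Convert a pkcs12 to PEM - e.g. to use the key/certs with the x509/rsa commands above (a sketch; -nodes writes the private key out unencrypted)
openssl pkcs12 -in keyStore.p12 -out keyStore.pem -nodes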
Dump private key content
openssl rsa -in host.key -text
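Check that a private key matches a certificate - if the two modulus hashes differ, you have mismatched files (a sketch using the files above)
openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa -noout -modulus -in host.key | openssl md5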
----------------------------------------------------------------------
Using OpenSSL - creating and modifying keys
----------------------------------------------------------------------
Create a private key and certificate signing request (CSR)
openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
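Create a self-signed certificate - handy for dev servers; a sketch, adjust the lifetime and filenames to taste
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt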
----------------------------------------------------------------------
Using keytool - most common activities
----------------------------------------------------------------------
Generally used for working with Java keystore (JKS) files.
List contents of a JKS
keytool -list -v -keystore keystore.jks
Dump a cert
keytool -printcert -v -file host.crt
Export a cert from a JKS for given alias
keytool -export -alias sitename -file sitename.crt -keystore keystore.jks
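Import a cert into a JKS - e.g. to trust the cert exported above (a sketch; keytool will prompt for confirmation)
keytool -import -trustcacerts -alias sitename -file sitename.crt -keystore keystore.jks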
List default JVM CA certs
keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts
----------------------------------------------------------------------
Debugging an SSL Connection
----------------------------------------------------------------------
You are trying to set up a Java webserver fronting SSL and having issues.
Test the connection using openSSL to see what SSL it supports
openssl s_client -connect mysite.com:443
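Add -showcerts to also dump the full certificate chain the server presents (a sketch)
openssl s_client -connect mysite.com:443 -showcerts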
Enable SSL debug
Add the following to the JVM startup command:
-Djavax.net.debug=[ssl|all]
and see this to understand the output.
This will often lead you to the cause of the connection issues.
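For example, a full startup command might look like this (myserver.jar is a hypothetical name - adapt to your own launch script):
java -Djavax.net.debug=ssl -jar myserver.jar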
----------------------------------------------------------------------
Resources
----------------------------------------------------------------------
Tuesday, April 16, 2013
Citrix on Linux
When I first started using Linux, one serious issue I had was connecting to my clients' desktops using Citrix - aka the Xen Desktop.
After a while I found a solution. This is how I have to interact with a Citrix session from my Linux distro (Mint 14). Relevant for v12.1.0 of the icaclient for Linux.
Installing
- download the Receiver (icaclient) at http://www.citrix.com/downloads/citrix-receiver/receivers-by-platform/receiver-for-linux-121.html
- choose the .deb version for Debian/dpkg install
- attempt install - it will fail with a problem in the postinst script
- uname -m architecture is not expected
- apply this workaround:
- found fix here: http://forums.citrix.com/thread.jspa?threadID=306353&tstart=0
- modify: /var/lib/dpkg/info/icaclient.postinst
- Replace the line that says
echo $Arch|grep "i[0-9]86" >/dev/null
with
echo $Arch|grep -E "i[0-9]86|x86_64" >/dev/null
- Run dpkg --configure icaclient
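- or apply the edit non-interactively - a sketch with sed, assuming the stock postinst contents:
sudo sed -i 's/grep "i\[0-9\]86"/grep -E "i[0-9]86|x86_64"/' /var/lib/dpkg/info/icaclient.postinst
sudo dpkg --configure icaclient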
- complete installation
- access to Citrix connections from Firefox should work now
Running
- set desktop Panel to Auto-Hide (right-click on taskbar, Panel Settings)
- due to BUG: which offsets mouse position by your taskbar width
- connect to Citrix and launch a desktop
- if it launches maximized, unmaximize
- due to BUG: running maximized can cause latency in keyboard strokes
- if right-click on titlebar shows 2 context menus (instead of one), minimize then restore the window
- (attempt to) move the window by dragging the titlebar over to the right a little bit
- now resize the window frame by pulling out on the right side
- should be ready
Saturday, March 23, 2013
Git Cheat Sheet
Your Best Reference: Pro Git -- download in PDF, ePub or Mobi format here: http://git-scm.com/book
Tip: Use Calibre's built-in server to push the ePub to your smartphone.
----------------------------------------------------------------------
Basic Git - Startup
----------------------------------------------------------------------
Basic Git Architecture
- Decentralized - everybody's copy of a project is a full database copy
- versions not stored as deltas - instead they are stored as complete instances of the files
- all content is compressed in the database
- new file => Untracked --> files unknown to Git
- 'git add' => Staged --> ready for commit, file or version not yet in database
- 'git commit' => Tracked --> file/version tracked in the database
Configuring Your Global Identity - Do this when first installing Git.
Set name
> git config --global user.name "Your Name"
Set email address
> git config --global user.email you@example.com
Create New Project
Create source directory. Create as a Git repo via:
> git init
Unlike SVN/CVS/VSS, the repo lives in your project directory. It's OK.
Ignore Files with .gitignore
- Put .gitignore at the project top level directory
- Add a line per filter; example:
# this is a comment
*.class
logs/
- this example excludes all class files, and the logs directory
----------------------------------------------------------------------
Basic Git - Typical Workflow
----------------------------------------------------------------------
See what has Changed
> git status
Commit Tracked, Modified Files
> git add -u
> git commit -m "notes about the commit"
Show changes to tracked files that are NOT staged
> git diff
Show changes to tracked files that ARE staged for commit
> git diff --staged (or --cached, an alias)
Add an Untracked File
> git add <filename>
Revert changes to a tracked file:
> git checkout -- <filename>
Revert a Staged File
It got added to stage area, but you don't want to commit it yet
> git reset HEAD <filename>
Revert all Staged Files
Unstages all from the staging area - maybe you ran 'git add .' and .gitignore did not filter properly
> git reset HEAD
Temporarily Switch to Another Branch
Got some changes not ready for committing, but need to pop to another branch - stash the current work
> git stash
Switch to another branch (git checkout)
Then come back and reapply latest stashed changes
> git stash apply
Or see list of stashes
> git stash list
And apply a named stash
> git stash apply stash@{1}
Pull Updates from a Remote Repo
Typical - when you're working with a team
- the rebase keeps the history cleaner by moving your local commits to after the new merges
> git pull --rebase
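If you have uncommitted local changes the pull may refuse to run; a common sequence (a sketch) is
> git stash
> git pull --rebase
> git stash pop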
----------------------------------------------------------------------
Basic Git - Details
----------------------------------------------------------------------
Add a New File
create a README.md - this stages it
> git add README.md
and commit
> git commit -m "initial" README.md
Commit All Changed Files
Stage any changed files
> git add .
See what is staged
> git status
Commit
> git commit -m "your message"
Do all at once
> git add -A && git commit -m "your message"
Or use -a to skip staging (the 'git add' part)
> git commit -a -m "message"
Revert a Staged File
It got added to stage area, but you don't want to commit it yet
> git reset HEAD <filename>
Revert all Staged Files
Unstages all from the staging area
> git reset HEAD
Revert all Changes on the Branch
Don't like where the branch is going? Undo all changed files
> git checkout -f
Revert changes to a tracked file:
> git checkout -- <filename>
Revert a commit
There are myriad solutions, see here
http://stackoverflow.com/questions/927358/how-to-undo-the-last-git-commit
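The safest option for a commit that was already pushed is a revert, which adds a new commit undoing the old one - e.g. for the last commit
> git revert HEAD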
See What is Modified
Show any modified or untracked files
> git status
Show changes to tracked files that are NOT staged
> git diff
Show changes to tracked files that ARE staged for commit
> git diff --staged (or --cached, an alias)
Committing a File
Check for annoying whitespace changes
> git diff --check
See what changed - shows patch diffs
> git log -p filename
Commit and enter comment editor
> git commit README.md
Conventional commit comment:
- 50-char summary, followed by...
- blank line, followed by...
- detailed description of change
See all changes, ever
> git log
And for a certain file
> git log filename
Just the last 2 commits
> git log -2
Last commit with patch diffs
> git log -p -1
Commits since a certain date
> git log --since 1.week
> git log --since 10.days
> git log --since 2013-02-03
Commits in date range
> git log --since 2.weeks --until 1.week
Commits by committer (modifier of file)
> git log --committer username
See 'gitk' for a visual git log UI
Changing Last Commit
Add some files to the last commit. Adds whatever is staged:
> git commit --amend
Change the commit comments - assumes nothing is staged:
> git commit --amend -m "new message"
Deleting Committed Files
Just a single file, locally and from repo
> git rm filename.txt
A directory of files, locally and from repo
> git rm -r dirName
A directory of files, but ONLY from the repo, not local copies
> git rm -r --cached dirName
A file that was already staged:
> git rm -f filename.txt
Requires commit afterwards.
Renaming a File
Not explicitly supported internally, but calculated; and a convenience function:
> git mv oldname.txt newname.txt
Requires commit afterwards.
----------------------------------------------------------------------
Tagging
----------------------------------------------------------------------
Listing existing Tags
> git tag
Using wildcards to find tags
> git tag -l 'v1.2*'
Using a Tag
> git checkout <tagname>
Creating a lightweight Tag
Example for v1.0; lightweight tags are just pointer sets
> git tag v1.0
Creating an Annotated Tag
These are checksummed and contain annotation, and optional signature
> git tag -a v1.0 -m 'release 1.0'
Creating a Signed Tag
Must have a private key installed
> git tag -s v1.0 -m 'release 1.0'
Tagging After the Fact
Forgot to tag? No matter, find the relevant commit
> git log --pretty=oneline
And tag using the checksum (first 6 or 7 characters of the checksum)
> git tag -a v1.2 9fceb02
And verify
> git show v1.2
Sharing Tags with Remotes
Tags must be pushed out like branches are
> git push origin v1.5
Or, all at once
> git push origin --tags
----------------------------------------------------------------------
Branching and Merging
----------------------------------------------------------------------
Show Current Branch
Currently checked-out branch
> git branch
List All Existing Branches
Show all branches - star is currently checked out
> git branch -v (verbose gives info on last commit)
Creating a Branch
Create new branch based on some other branch
> git checkout -b new-branch existing-branch
Create new local branch from current, and immediately check it out
> git checkout -b newbranch
Create new local branch from the remote master branch, and immediately check it out
> git checkout -b newbranch origin/master
Merge one local branch into another
This merges feature into master
> git checkout master
> git merge feature
Merge branch from remote repo into local repo
First update local copy of remote
> git fetch origin
Look at changes to remote branch
> git log origin/featureA ^featureA (the ^ excludes commits already reachable from your local featureA, leaving just what's new on the remote)
Merge into local branch (checkout first if necessary)
> git merge origin/featureA
Deleting a Branch
Must not be your current branch, and must not have outstanding changes
> git branch -d <branch>
----------------------------------------------------------------------
Creating a Repository on GitHub
----------------------------------------------------------------------
You created a project and want to post it to share with colleagues.
Locally you did this:
- create local Git repo
- Do work
- Commit files
- Create Repo (there is a magic button)
- Note the new project URL (ending in .git)
> git remote add origin <new git URL, ends in .git>
> git remote -v (examine your remotes)
> git push -u origin master (or whatever branch name you're working in)
----------------------------------------------------------------------
Working with Remote Repositories
----------------------------------------------------------------------
A Remote is an alias to a remote repository.
Show List of Remotes
See list of known remotes
> git remote -v
Add alias to a remote repo - alias is often 'origin' by convention
> git remote add <alias> [root project URL ending in .git]
Uploading to a Remote
Upload branch contents to a named remote (origin)
Uploading a Branch to Someone Else's Repo
When branch name is the same
> git push -u <alias> <branch-name>
...example:
> git push -u origin master
When branch name is different on remote
> git push -u <alias> <local-branch-name>:<remote-branch-name>
...example:
> git push -u origin featureX:patch331
The -u option sets up an upstream branch - i.e. maps local to remote branch.
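Once the upstream mapping exists, later syncs need no arguments
> git push
> git pull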
Getting changes from Remote Repo
Doing this will do a 'fetch' followed by a 'merge' - i.e. get you up to date
Pull will always merge into the current branch; specify which branch to pull from:
> git pull
Fetch is less invasive - it downloads remote changes without merging them, so you can inspect first and merge when ready
> git fetch
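A typical fetch-based flow - download, inspect, then merge deliberately (a sketch against origin/master)
> git fetch origin
> git log HEAD..origin/master
> git merge origin/master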
----------------------------------------------------------------------
Working with GitHub Repos
----------------------------------------------------------------------
There's a project you want to get some changes into. Do this.
1. Look at the project's GitHub page; see 'Network' and 'Issues' tags.
Make sure someone else isn't already doing what you wanted to do.
2. On the GitHub page, press 'Fork' to create your own repo.
3. Clone the fork locally using its new URL
> git clone [url of my fork] <my local dir>
4. Add the fork Repo as a remote
> git remote add origin <fork URL ending in .git>
5. Add the original Repo as a remote for easy updating (to stay sync'd)
> git remote add upstream <original project URL ending in .git>
Contributing to Projects - Go with the Flow
Different projects may have different workflows - find out by reading the project README.
This can vary based on project size and organizer preference.
Simple Workflow Example
A simple contribution workflow looks like this:
- developer forks project
- clones fork locally (steps 1-5 above)
- does work in topic branches - not master!
- pushes topic branches up to fork repo
- submits pull request to original project via GitHub
1. Do initial Fork setup, steps 1-5 above
2. Create a Topic Branch
This is a branch for doing local work in
> git checkout -b new-feature origin/master
3. Keep Local work in synch
Synch with the original project - first get all its branches and updates
> git fetch upstream
Merge its changes into your working branch - where 'master' is the remote branch to merge in
> git merge upstream/master
4. Do work
Do work in the topic branch ('new-feature' above) as usual
Stage and Commit when ready
Periodically merge in remote changes (#3)
5. Push changes out to your Fork repo
> git push origin new-feature
6. When Ready
Good tests are included? Bugs are out?
Log into your Github fork project and switch branches (using selector) to your new-feature branch
Verify contents; update Readme file
Navigate to original project and submit a Pull Request
Tuesday, March 12, 2013
Grails bootstrap environment
Every now and then we'd really like to know what Grails has mystically configured for us in that strange little world of the Bootstrap.
The following is a chunk you can paste into Bootstrap.groovy to see what is under the hood and available to you.
One of the more useful purposes I have found for this information is pulling out service classes for some sort of startup initialization.
println "------- ServletContext Attributes -----------------------" def names = servletContext.getAttributeNames() names.each() {println "$it"} def ctx = servletContext.getAttribute('org.codehaus.groovy.grails.APPLICATION_CONTEXT') def osiv if (ctx) { def beans = ctx.beanDefinitionNames beans.sort() println "------- AppContext Beans -----------------------" beans.each() { if (it.indexOf("Interceptor") > 0 || it.indexOf('interceptor') > 0 || it.indexOf('Handler') > 0) { println "$it" } if (it == 'openSessionInViewInterceptor') { // Get the interceptor, check it state osiv = ctx.getBean(it) println "\t\t--> OSIV enabled?? ${osiv.isSingleSession()}" } } // Get the private interceptors field from the request handlerMapping class def field = org.springframework.web.servlet.handler.AbstractHandlerMapping.class.getDeclaredField('interceptors') field.accessible = true println "------- Interceptors via controllerHandlerMappings -----------------------" // Get this Field on the given object, the actual HandlerMapping that declares the interceptors def interceptors = field.get(ctx.controllerHandlerMappings) if (interceptors) { println "Got interceptors class: ${interceptors.class.name}" interceptors.each() { println "$it" } } else { println "Could not get interceptors class" } } else { println "No AppContext" } def app = servletContext.getAttribute('grailsApplication') def messageManagerService if (app) { println "\n-------------------------------------------------" println "------- grailsApplication -----------------------" println "-------------------------------------------------" println "\n------- Properties -----------------------" app.properties.each { key -> println "### $key" } println "\n------- All Artefact Classes -----------------------" def cz = app.allArtefactClasses cz.each { println it } println "\n------- Domain Classes -----------------------" cz = app.getArtefacts(DomainClassArtefactHandler.TYPE) cz = app.domainClasses cz.each { println "$it (${it.propertyName})" } println "\n------- Controller Classes -----------------------" //cz = app.getArtefacts(ControllerArtefactHandler.TYPE) cz = app.controllerClasses cz.each { println "$it (${it.propertyName})" } println "\n------- Service Classes -----------------------" //cz = app.getArtefacts(ServiceArtefactHandler.TYPE) cz = app.serviceClasses cz.each { println "$it (${it.propertyName})" } println "\n------- UrlMappings Classes -----------------------" //cz = app.getArtefacts(UrlMappingsArtefactHandler.TYPE) cz = app.urlMappingsClasses cz.each { println "$it (${it.propertyName})" } // Pull out my service class by registered bean name messageManagerService = app.mainContext['messageManagerService'] } else { println "No grailsApplication" }
Wednesday, March 6, 2013
eBook Cheat Sheet
Just some of my Tomboy notes about publishing eBooks.
Ebook Formats
- ePub - this is an open standard - meaning nobody owns it
- latest version is ePub3 which supports media overlays and audio/video
- supported by Nook, Kobo, and reader apps like Stanza
- mobi - this is Amazon's format for the Kindle
- this is replaced by KF8 for the newer devices
- supports more multimedia options
- roughly comparable to ePub2
- not as robust as ePub3
- why is my MOBI much bigger than my ePub?
- it's because MOBI aggregates alternate formats into this one file to account for different devices
- this is OK - the size of your MOBI file will NOT be the size of the download - that will depend on what kind of device it's going to
- Calibre
apt-get install calibre
- Adobe Digital Editions
- Amazon Kindle Previewer
- Use Amazon's free KindleGen app
- Using Calibre
- supports ePubs
- install Calibre
- run the 'Share' server - specify the port it will run on
- from device, load the Stanza app
- configure host:port as a server in Stanza
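- the server can also be started from the command line - a sketch, assuming port 8080
calibre-server --port 8080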
- Using Amazon Kindle App
- supports MOBI
- Sign into Amazon, access Kindle Store (search bar)
- access Manage My Devices (top), then Devices (left)
- set up Personal Documents with a mail-to email address
- email the MOBI using the email address
- push to your device when doc appears under Personal Documents
- This only seems to cause trouble - and a barrier to sales!!
- At this point, it is far easier just to download your book than to steal it - it's not worth trying to copy your book for the 3 or 4 bucks it would cost - unless you are a teenage hacker
Friday, March 1, 2013
EmberJS - Putting a REST service behind the TodoMVC app
Background
I'm trying to select a front-end framework for new apps - who isn't these days? - and in researching I came across Addy Osmani and company's fantastic bit of work: http://addyosmani.github.com/todomvc/
After looking at this and some other stuff I decided to give Ember a try.
So why not start by hooking up a real REST service to this ToDo app? But first I would need to understand how Ember communicates with the server. A little research came up with the JSON Interface section in this post.
Implementation
I'm working with a Grails server, so providing the REST interface was pretty easy, in theory.
In practice, I decided to use a plugin for the REST interaction, and chose the json-rest-api for its simplicity. However I soon found out that it would not work out of the box - it needed upgrading to Grails2.1, and it did not speak the dialect of REST that Ember prefers.
So I set about modifying the plugin to support Ember-style, and maintain its original style as well. The resulting fork of the json-rest-api plugin is here.
Here are steps I took:
create grails app
- grails create-app todo
- add Todo domain class with fields from TodoMVC todo.js
install json-rest-api
- added grails.plugin.location to the BuildConfig
- I already had this project downloaded locally
- using grails.plugin.location means changes to the plugin are automatically picked up
- changes to Todo domain class:
- add static 'expose' to Todo domain class
- add toJSON() and fromJSON() methods - my enhancement to the json-rest-api plugin to support i18n and custom rendering
// Adding plug-in: grails-json-rest
grails.plugin.location.jsonrest = '/opt/projects/grails-json-rest-api'
class Todo {
    String title
    boolean isCompleted

    static constraints = {
        title(blank:false, nullable:false, maxSize:64)
        isCompleted(default:false)
    }

    String toString() {
        StringBuilder sb = new StringBuilder()
        sb.append("\n id: ").append(id)
        sb.append("\n Title: ").append(title)
        sb.append("\n Completed: ").append(isCompleted)
        sb.toString()
    }

    // --- json-rest-api artifacts ---

    static expose = 'todo'  // Expose as REST API using json-rest-api plugin
                            // this will be the entity name on the URL

    static api = [
        // If allowing json-rest-api to use 'as JSON' to render, you may exclude
        // unwanted fields here (done with its registered ObjectMarshaller)
        excludedFields: [ "attached", "errors", "properties" ],
        // You may override how the list() operation performs its search here
        list : { params -> Todo.list(params) },
        count: { params -> Todo.count() }
    ]

    /*
    // This is the standard way to override JSON marshalling for a class
    // It uses a ClosureOjectMarshaller[sic] to select fields for marshalling
    // It is less efficient for the plugin which is based on JSONObject, but this will be
    // used if you do not define a 'toJSON' method.
    // NOTE: if using this approach, the json-rest-api marshaller will NOT be used, hence the
    // api.excludedFields if defined will be ignored
    // Example taken from http://grails.org/Converters+Reference
    static {
        grails.converters.JSON.registerObjectMarshaller(Todo) {
            // you can filter here the key-value pairs to output:
            return it.properties.findAll {k,v -> k != 'passwd'}
        }
    }
    */

    /**
     * Rendering this object into a JSONObject; allows more flexibility and efficiency in how
     * the object is eventually included in larger JSON structures before ultimate rendering;
     * MessageSource offered for i18n conversion before exporting for user audience.
     * @param messageSource
     * @return
     */
    JSONObject toJSON(def messageSource) {
        JSONObject json = new JSONObject()
        json.put('id', id)
        json.put('title', title)
        json.put('isCompleted', isCompleted)
        return json
    }

    /**
     * Custom bind from JSON; this has efficiency since the grails request.JSON object offers
     * a JSONObject directly
     * @param json
     */
    void fromJSON(JSONObject json) {
        [ "title" ].each(JSONUtil.optStr.curry(json, this))
        [ "isCompleted" ].each(JSONUtil.optBoolean.curry(json, this))
    }
}
install functional testing plugin
- grails install-plugin functional-test
- required to test the json-rest-api plugin (my change)
Added logging into Config.groovy
inside the environment {} block; also added similar to development {}
test {
    grails.logging.jul.usebridge = false
    log4j = {
        appenders {
            rollingFile name:"todo", maxFileSize:"10000KB", maxBackupIndex:10,
                file:"logs/todo.log",
                layout:pattern(conversionPattern: '%d{yyyy-MM-dd HH:mm:ss,SSS z} [%t] %-5p[%c]: %m%n')
            console name:'stdout',
                layout: pattern(conversionPattern: '%d{dd-MM-yyyy HH:mm:ss,SSS} %5p %c{1} - %m%n')
            //console name:'stacktrace'
        }
        debug 'grails.app','com.gargoylesoftware.htmlunit.WebClient','org.grails.plugins.rest',additivity = true
        warn 'grails.app.services.grails.buildtestdata','BuildTestDataGrailsPlugin','grails.buildtestdata',
            'org.codehaus.groovy','org.grails.plugin','grails.spring','net.sf.ehcache','grails.plugin',
            'org.apache','com.gargoylesoftware.htmlunit','org.codehaus.groovy.grails.orm.hibernate','org.hibernate'
        root {
            debug 'stdout', 'todo'
            additivity = true
        }
    }
}
create Todo functional test
- using a Generic Mixin test class that I added into the json-rest-api project, resulting functional test class looks like:
@Mixin(GenericRestFunctionalTests)
class TodoFunctionalTests extends BrowserTestCase {

    def log = LogFactory.getLog(getClass())
    def messageSource

    void setUp() {
        super.setUp()
    }

    void tearDown() {
        super.tearDown()
    }

    void testList() {
        genericTestList(new Todo(title:"title.one"))
    }

    void testCreate() {
        genericTestCreate(new Todo(title:"title.one"))
    }

    void testShow() {
        genericTestShow(new Todo(title:"title.one"))
    }

    void testUpdate() {
        genericTestUpdate(new Todo(title:"title.one"), [title:"title.two"])
    }

    void testDelete() {
        genericTestDelete(new Todo(title:"title.one"))
    }
}
Wiring up the Ember interface
Time to modify the TodoMVC project to hook it up to my Grails app.
- pulled the TodoMVC source into my Grails project
- modified store.js to configure the REST adapter - default is DS.RESTAdapter, but some changes were required:
- modified the namespace to match my context and path that json-rest-api listens to (/api)
- had to extend the built-in RESTSerializer to stop its crazy conversion of my camel-case field names into underscore versions
// Override the default behaviour of the RESTSerializer to not convert
// my camelized field names into underscored versions
Todos.TodoRESTAdapter = DS.RESTSerializer.extend({
    keyForAttributeName: function(type, name) {
        return name;
        //return Ember.String.decamelize(name); // this is the default behaviour
    },
    keyForBelongsTo: function(type, name) {
        var key = this.keyForAttributeName(type, name);
        if (this.embeddedType(type, name)) {
            return key;
        }
        return key + "Id";
    },
    keyForHasMany: function(type, name) {
        var key = this.keyForAttributeName(type, name);
        if (this.embeddedType(type, name)) {
            return key;
        }
        return this.singularize(key) + "Ids";
    }
});

Todos.Store = DS.Store.extend({
    revision: 11,
    adapter: DS.RESTAdapter.create({
        bulkCommit: false,
        namespace: "todo/api",
        serializer: Todos.TodoRESTAdapter
    })
});
- found out that Ember sends the entity name to the server in the plural form sometimes, and the json-rest-api plugin does not like this; modified the plugin to account for this. See this other post for the breakdown of Ember's REST dialect.
- found out that running in Tomcat7 did not allow changes to my JS files (grrr....)
- ran instead via grails run-app, and added a logger to the 'development' env in Config.groovy in support of this
- (yes I tried fixing Tomcat by disabling caching/antiLocking in the servlet context)
Result
And voila - after modifying the REST plugin, the Grails app was pretty easy to accommodate to the Ember pulls. The resulting app is located here.
Installing
Download the app
git clone https://github.com/kentbutler/todomvc-grails-emberjs.git
Download the fork of the grails-json-rest-api (at least for now)
git clone https://github.com/kentbutler/grails-json-rest-api.git
Place them alongside each other, then test the app by opening a console inside the app directory and running:
grails test-app -functional
If tests pass then run the app via
grails run-app
If they do not pass, ensure the path to the json-rest-api inside of grails-app/conf/BuildConfig.groovy accurately locates the grails-json-rest-api plugin.
EmberJS, Notes and Gotchas
JSON Interface
Description of the expected REST interface, adapted from http://stackoverflow.com/questions/14922623/what-is-the-complete-list-of-expected-json-responses-for-ds-restadapter :
Context | Server URL | Method | Req. Data | Resp. Data
---|---|---|---|---
Get list of all | /users | GET | | {"users":[{...},{...}]}
Get one | /users/123 | GET | | {"user":{...}}
Create | /users | POST | {"user":{...}} | {"user":{...}}
Update | /users/123 | PUT | {"user":{...}} | {"user":{...}}
Delete | /users/123 | DELETE | N/A | null
Create in bulk | /users | POST | {"users":[{...},{...}]} | {"users":[{...},{...}]}
Update in bulk | /users/bulk | PUT | {"users":[{...},{...}]} | {"users":[{...},{...}]}
Delete in bulk | /users/bulk | DELETE | {"users":[1,2]} | {"users":[1,2]}
This is EmberJS's documentation on this:
http://emberjs.com/guides/models/the-rest-adapter/#toc_the-rest-adapter
Ember Gotchas!
- Ember tolerates no unmapped fields in your JSON results! Presence
of these will result in an Ember.assert() error, which in Firefox
produced no logging whatsoever, just a silent failure!
- Example: if your data looks like this {"users":[{"name":"bob","id":3}], "count":1}
and Ember does not know about the 'count' field, JSON response processing will abort
- Ember transforms camel-cased model field
names into underscored names when emitting JSON, and expects underscored
names in retrievals:
- e.g. model field 'isCompleted' will be sent in JSON updates as 'is_completed' - and must be sent as 'is_completed' as well
- WORKAROUND: you must explicitly map the attribute in the RESTAdapter - see the EmberJS documentation here
- or see the related Question below for a workaround
- Q: what is this optional 'meta' property we can put into our JSON data? as seen in extractMany()
- A: TBD
- Q: how can we get Ember to quit looking at fields as being underscored
- A: create a mapping in the REST adapter
- see http://emberjs.com/guides/models/the-rest-adapter/#toc_underscored-attribute-names
- OR, override the Serializer class - somewhere in global code, define your subclass
window.Todos.TodoRESTAdapter = DS.RESTSerializer.extend({
    keyForAttributeName: function(type, name) {
        return name;
    },
    keyForBelongsTo: function(type, name) {
        var key = this.keyForAttributeName(type, name);
        if (this.embeddedType(type, name)) {
            return key;
        }
        return key + "Id";
    },
    keyForHasMany: function(type, name) {
        var key = this.keyForAttributeName(type, name);
        if (this.embeddedType(type, name)) {
            return key;
        }
        return this.singularize(key) + "Ids";
    }
});
and then use this in your Stores:

Todos.Store = DS.Store.extend({
    revision: 11,
    adapter: DS.RESTAdapter.create({
        bulkCommit: false,
        namespace: "todo/api",
        serializer: Todos.TodoRESTAdapter
    })
});
- Q: How to do the analogous replacement of underscore() when serializer is serializing back to REST?
- A: Here is how serializing to REST gets done:
- base class DS.Serializer.serialize() is invoked
- calls DS.Serializer().addAttributes() which loops through all attributes
- calls DS.Serializer().addAttribute() which can be overridden
- calls _keyForAttributeName()
- eventually calls keyForAttributeName() in the Serializer
- this is where we override
- calls _keyForAttributeName()
- A: Here is how serializing to REST gets done:
- Q: how should I represent nested data elements?
- A: use foreign key in owning object, and provide a sideloaded list of the related objects, and/or a custom transformation
- see http://emberjs.com/guides/models/the-rest-adapter/#toc_sideloaded-relationships
- findAll: function(store, type, since)
- contains a 'success': callback - set breakpoint here
- calls didFindAll()
- uses a Loader, acquired as DS.loaderFor(store), and a Serializer
- invokes serializer.extractMany(loader, payload, type)
- where payload is your data! and type = model name
- stepping into this puts you into a wrapper - step again into the 'apply()'
- first parses sideloaded data using sideload()
- see this for description of sideloading:
- extractRecordRepresentation()
- getMappingForType()
- if [ should sideload ]
- loader.sideload()
- else
- loader.load()
- store.load()
- this adds a prematerialization entry into a list for later parsing
- somewhere much deeper in the stack, we get to....
- DS.RESTSerializer.keyForAttributeName()
- which calls Ember.String.decamelize()
- this is where underscores are introduced
- store.load()
- loader.load()
Pluralizing Strategy
- executed in method: pluralize()
- option 1: provide a mapping in definition of the RESTAdapter, as ....
adapter: DS.RESTAdapter.create({ bulkCommit: false,
- option 2: defaults to name + "s"
- maintains a 'stringCache' which JSON string field names are looked up against
- these are given codes like 'st295' where the # is the field uuid
Wednesday, February 20, 2013
Agile: Diagram-Driven Development
Agile methodologies have been helping software developers produce higher quality software in less time for almost ten years now.
However, there are always ways to improve. After observing teams function in some form of Agile for the last decade, I am left with the impression that there was a baby in the bathwater who may have been unwittingly tossed out.
Agreed - software documentation is not something that customers typically ask for, hence it is of lower value in Agile methodologies. It is not entirely without regard - some Agile processes like FDD and XP, for example, include short Design phases in their process, which one can interpret as generating some form of Documentation.
In practice though, teams trying to become Agile tend to cling to the concepts in the Agile Manifesto, which extoll production over documentation.
The Oversight
I believe there is room for better balance here. The original intention of "documentation for developers" - i.e. Diagrams - was to save time and money. It was a good idea, but it was hard to get right. Hence the entire practice was largely thrown out with Agile.
The Proof
People tend to do what is necessary to get their job done. Normally that includes installing and configuring an IDE, reading an Ant or Maven build script, and digesting source code.
In my experience, unless a project is very simple, as I am trying to digest source code, I tend to jot down class diagrams, activity diagrams, and deployment diagrams. I do this because with larger systems I cannot remember all of the business rules, the entity relationships, and where all the servers are.
A developer must understand the codebase to be productive. The more clarity, the higher quality the solution tends to be.
This requires a significant investment by each team member. Requiring each developer to take on this cost multiplies overall cost across the project - developers either bear the cost directly by doing the research, or indirectly via longer execution time for each task. A development task cannot be performed without some knowledge of the system.
The Ideal Solution
Of course we could say that ideally, there would be diagrams documenting each aspect of the system and they would be magically updated as things change. It is easy to see that this is not Reality.
Challenges include:
- comprehensive production of diagrams - do we have diags for everything?
- maintenance of diagrams - is anyone updating these as things change?
- finding a tool that does what we need and is easy to use
An effective way to relieve this tension is by clarifying and simplifying the documentation needs. I suggest reducing the set of documents to:
- Activity diagrams - Essential - for complex business rules
- Entity diagrams - Helpful - both class and database
- Deployment diagrams - Helpful - for servers in use on the project
Entity diagrams are the easiest to manage, as they should always be generated. This requires a tool that will reverse-engineer classes. Open source solutions tend to fall short of this, however. Current options include:
- the diver project
- Object Aid UML Explorer, an Eclipse plugin available at http://www.objectaid.com/update
- UMLet - an Eclipse plugin available at Eclipse Marketplace (although it seems un-downloadable at the moment?)
Database tools tend to have diagramming capability at some level, but investigation is required to find the right solution for the primary database in use during development.
Capturing business rules, however, is a different story. Its challenges are unique:
- worthwhile sets of rules tend to be complex, meaning they are difficult to actually diagram
- diagramming tools often require manual layout, which imposes maintenance costs for complex diagrams
- waste no time diagramming simple rules
- waste no time debating complex rules - let the diagram speak
These diagrams must be required to be maintained by any contributor working on the code. I would suggest that Test-Driven Development be augmented by Diagram-Driven Development in these cases.
Definition: Diagram-Driven Development
Test-Driven software development which is directed by business rules captured in diagrams. Can be applied to any significant body of business rules which will be implemented in code.
- developers and stakeholders agree on the set of rules, or modifications to existing rules
- ideally: rules are diagrammed in real-time during user story discussions
- rules are coded into test cases
- rules are coded into source code
And in return, we have useful documentation which can be used to reassure The Customer that we know what we built.
Monday, February 18, 2013
Performance: Grails vs. Java
Background
I wanted to compare the Grails platform vs. a straight Java platform with a similar stack, to figure out which would provide the best dev and deploy environment for new web projects. Grails has development advantages with its Groovy language and plugin architecture, but Groovy's rampant runtime indirection has an implied cost.
Before I ran into Grails I was a long-time user (and occasional submitter) of the AppFuse framework, an awesome rapid development platform for Java web-apps. I was rather pleased to find that Grails has quite a bit in common with the AppFuse stack. For example:
- Spring IOC-based
- SpringMVC-based (one option among AppFuse's choices)
- Hibernate persistence
- AspectJ AOP
- SiteMesh templating
- SpringSecurity
A comparison of the server portion of the two mentioned platforms would be a decent comparison of performance between a Grails webapp vs. a plain Java webapp.
Objective
My intention is to isolate the server-side processing as much as possible, by reducing client generation to a bare minimum. Ideally this test would eliminate client page generation altogether and simply invoke server operations via a REST/JSON interface.
I decided not to do that though since it would require significantly more effort, and also, I believe, introduces variance in the processing; while AppFuse comes bundled with CXF for this purpose, Grails uses its native DSL on top of SpringMVC plus your choice of JSON processor to produce the same. While comparing these 2 would be interesting enough, it wasn't my primary objective.
Approach
To reduce variance in the platforms I am simply using SpringMVC to process two kinds of requests:
- 'Create/Add' request
- a nearly empty 'Retrieve/List' request
Environment
All of the tests were built and run with the following:
- Oracle Java 1.6.0_38
- started with Java1.7.0_10, but was forced back - read on
- Linux Mint 14
- Grails 2.1.1
- AppFuse 2.x
- Maven 2.2.1
- MySQL 5.5.29 for debian-linux-gnu
- Apache Tomcat 7.0.32
- Apache JMeter 2.8
- JProfiler7
- Hardware, as reported by Gnome system monitor:
- Intel® Core™ i7-3610QM CPU @ 2.30GHz × 8
- 15.6 GiB
General Test Layout
To produce equivalent tests in both platforms I created 2 POJOs
- Person
- Account
and exercised 2 operations:
- add new Person and Account
- list all Persons
Expectation
I fully expected the AppFuse platform to easily trounce the Grails platform, since Grails is encumbered by DSLs and Groovy, and does a lot of dynamic class decoration for things like installing Hibernate functionality onto domain classes.
Grails Test Creation
To produce the desired test in Grails I used the usual combination of Grails generators:
- grails create-app
- grails create-controller
class Person {
    String name
    Account account

    static constraints = {
        name(unique:true)
        account(nullable:true)
    }

    String toString() {
        return name
    }

    boolean equals(Object input) {
        return this.name == input.name
    }
}

class Account {
    String accountId
    Float balance
    transient Float amount

    static belongsTo = Person
    static transients = ['amount']

    static constraints = {
        accountId(blank:false)
        amount(blank:true,nullable:true)
    }
}
Then I modified the generated Person list view under grails-app/views/person/list.gsp to only display the total count of Person records.
A zip of the project can be downloaded here.
Java Test Creation
I used AppFuse's project creation maven plugin with the SpringMVC archetype to generate the project.
mvn archetype:generate -B -DarchetypeGroupId=org.appfuse.archetypes -DarchetypeArtifactId=appfuse-basic-spring-archetype -DarchetypeVersion=2.2.1 -DgroupId=com.uss -DartifactId=txtest3 -DarchetypeRepository=http://oss.sonatype.org/content/repositories/appfuse
then included the full source from the AppFuse framework
mvn appfuse:full-source
and created 2 nearly identical POJOs, 'manually'
@Entity
@Table(name="account")
public class Account extends BaseObject {
private Long id;
private String accountId;
private Float balance;
private transient Float amount;
@Id @GeneratedValue(strategy = GenerationType.AUTO)
public Long getId() {
return this.id;
}
public void setId(Long id) {
this.id = id;
}
@Column(name="accountId", length=50)
public String getAccountId() {
return accountId;
}
public void setAccountId(String accountId) {
this.accountId = accountId;
}
@Column(name="balance")
public Float getBalance() {
return balance;
}
public void setBalance(Float balance) {
this.balance = balance;
}
@Transient
public Float getAmount() {
return amount;
}
@Override
public String toString() {
return "AccountId:: " + getAccountId();
}
@Override
public boolean equals(Object o) {
if (! (o instanceof Account)) {
return false;
}
return this.getAccountId().equals( ((Account)o).getAccountId() );
}
@Override
public int hashCode() {
// only the field used by equals() belongs here; hashing
// accountId.toCharArray() would fall back to the array's identity hash
return accountId == null ? 0 : accountId.hashCode();
}
// quick sanity check that hash-based lookups behave as expected
public static void main(String[] args) {
HashSet s = new HashSet();
s.add("test");
s.add("test #2");
System.out.println("Hash contains test: " + s.contains("test"));
System.out.println("Hash contains test #2: " + s.contains("test #2"));
}
}
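As an aside on hashCode(): a tempting variant is to hash accountId.toCharArray(), but arrays inherit Object's identity-based hashCode(), so equal contents yield unequal hashes and HashSet/HashMap lookups silently fail. A quick standalone demonstration of the pitfall:

import java.util.Arrays;

public class ArrayHashDemo {
    public static void main(String[] args) {
        String s = "test";
        // two toCharArray() calls return distinct array objects, and arrays
        // use identity hashing - so these almost certainly differ
        System.out.println(s.toCharArray().hashCode() == s.toCharArray().hashCode());
        // Arrays.hashCode is content-based, so these are always equal
        System.out.println(Arrays.hashCode(s.toCharArray()) == Arrays.hashCode(s.toCharArray()));
    }
}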
@Entity
@Table(name = "person")
public class Person extends BaseObject {
private Long id;
private String name;
private Account account;
@Id @GeneratedValue(strategy = GenerationType.AUTO)
public Long getId() {
return this.id;
}
public void setId(Long id) {
this.id = id;
}
@Column(name="name", length=50)
public String getName() {
return name;
}
public void setName(String in) {
name = in;
}
@ManyToOne(optional=true,cascade=CascadeType.ALL,fetch=FetchType.LAZY)
@JoinColumn(name="acct_id", nullable=true, updatable=false)
public Account getAccount() {
return account;
}
public void setAccount(Account in) {
account = in;
}
@Override
public String toString() {
return "id: " + getId() + "\tname: " + getName();
}
@Override
public boolean equals(Object input) {
if (! (input instanceof Person)) {
return false;
}
return this.getName().equals( ((Person)input).getName() );
}
@Override
public int hashCode() {
// only name participates in equals(), so only name belongs here
return name == null ? 0 : name.hashCode();
}
}
and generated the scaffolding
mvn appfuse:gen -Dentity=Account
mvn appfuse:gen -Dentity=Person
Note: This is where I learned that AppFuse requires Java 6, due to an issue with the declaration of the Hibernate descriptors. (TODO: Citation)
and modified the generated Persons list view src/main/webapp/WEB-INF/pages/persons.jsp to only report the total count of Person records.
A zip of the project can be downloaded here.
Deployment
Both projects were independently deployed to Tomcat as packaged WARs and pummeled with 10k requests from a JMeter script.
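For anyone wanting to reproduce the load without JMeter, here is a rough stand-in driver. The context path, form field, and counts are placeholders of mine, not the actual JMeter plan, which also handled security negotiation and reporting:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SimpleLoadDriver {
    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8080/txtest3"; // hypothetical context path
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            post(base + "/persons", "name=person" + i); // 'Create/Add'
            get(base + "/persons");                     // 'Retrieve/List'
        }
        System.out.println("elapsed ms: " + (System.currentTimeMillis() - start));
    }

    private static void post(String url, String form) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        c.setRequestMethod("POST");
        c.setDoOutput(true);
        OutputStream out = c.getOutputStream();
        out.write(form.getBytes("UTF-8"));
        out.close();
        c.getResponseCode(); // force the exchange and read the status
        c.disconnect();
    }

    private static void get(String url) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        c.getResponseCode();
        c.disconnect();
    }
}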
Results
This is a snapshot of the JMeter results* side-by-side. The results I am focusing on are:
- Person List: 19ms avg (Grails) vs. 112ms avg (Java)
- Add Person: 37ms avg (Grails) vs. 134ms avg (Java)
also available here.
*Note: The AppFuse result shows 20k+ total samples due to additional requests required to negotiate security; hence the "Person List" and "Add Person" results should be compared directly.
Complete results are here:
Grails app:
JMeter Performance Results
Zipped JProfiler output
Java app:
JMeter Performance Results
Zipped JProfiler output
Note: The Call Tree results for the Java app show an apparent anomaly in that the sub-tree does not seem to sum up to the parent folder:
5% - 5,217 ms - 5,642,324 inv. com.uss.model.Account_$$_javassist_3.getHibernateLazyInitializer
2% - 3,271 ms - 5,642,324 inv. com.uss.model.Account.<init>
7% - 1,081 ms - 5,642,324 inv. org.appfuse.model.BaseObject.<init>
6% - 954 ms - 5,642,324 inv. com.uss.model.Account_$$_javassist_3.setHandler
6% - 938 ms - 5,642,324 inv. com.uss.model.Person.setId
6% - 925 ms - 5,642,324 inv. com.uss.model.Person.setAccount
6% - 874 ms - 5,642,324 inv. com.uss.model.Person.setName
I contacted the producer of JProfiler and was assured this is due to filtering, so not all of the child nodes are actually shown. I'm not very happy with how they chose to indicate that: the net result is misleading, and how many managers will sit through such an explanation? Worse, IMHO, is their stance that the numbers in the child nodes should not be expected to add up to their parent values - that has not been my experience with profilers over the years. Anyway, here is the explanation I received:
Thanks for your email. The explanation is this: "org.hibernate." is not a profiled package in the default session settings (see the "Filter settings" tab of the session settings). Only the first call into this package is measured. Then there are a lot of internal calls in that package and into other unprofiled packages that take some time. That time is attributed to "org.hibernate.Criteria.list". Deeper in the call tree some profiled classes are reached, for example in the "com.uss." packages. Those are shown separately. In general, the summed execution times of the children in the call tree do not have to add up to the execution time of the parent.
Conclusion
I scratched my head for a while when I first saw the JMeter results - then I decided to use a profiler. That's when I discovered and removed the horrible DisplayTag taglib. AppFuse was much more competitive afterwards.
But I am still trying to figure out where the significant differences lie. Some known differences are:
- AF includes the SpringSecurity filter, which I did not remove prior to testing
- however the profiler only shows this requiring < 1% of overall thread processing
- AF's and Grails' access to the Hibernate layer is similar - both use HibernateTemplate
- AF by default uses Criteria to list objects
- Grails' .list() method uses Criteria as well (see org.codehaus.groovy.grails.orm.hibernate.metaclass.ListPersistentMethod) - a minimal sketch of that listing path follows
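For reference, the 'list all Persons' path on both sides reduces to something like this Hibernate 3-era sketch - session handling is simplified and this is not the exact generated DAO code:

import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Sketch of the Criteria-based 'select all' both stacks ultimately issue.
public class PersonListing {
    private final SessionFactory sessionFactory;

    public PersonListing(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @SuppressWarnings("unchecked")
    public List<Person> listAll() {
        Session session = sessionFactory.getCurrentSession();
        // a Criteria with no restrictions is equivalent to "from Person"
        return session.createCriteria(Person.class).list();
    }
}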
For now I am left to conclude that Grails offers both development and runtime benefits, and so it will likely be my choice of platform.
Ways to Improve This Test
Some things I'd like to do when I get time:
- disable the AF SpringSecurity filter
- look up a single record and access the Account sub-object
- add a one-to-many relationship
Friday, February 15, 2013
Photography Cheat Sheet
Trying to get my head around my digital camera.
Aperture
- Def: the lens diaphragm opening; i.e. how large the shutter curtain opens
- Calibrated in f-numbers, a.k.a. f-stops
- f/22, f/16, f/11, f/8.0, f/5.6, f/4.0, f/2.8, f/2.0, f/1.8 etc.
- indicate inverse size: f/22 is smallest, f/1.8 is biggest
- Effect: controls amount of light entering
- each smaller step (moving towards f/1.8) lets in twice the light - see the arithmetic below
- 2nd Effect: affects Depth of Field (DOF)
- see section "Depth of Field"
- http://www.mir.com.my/rb/photography/fototech/apershutter/aperture.htm
- Note: the actual diameter for a given f-number is unique to the lens -
e.g. a wide-angle lens opens differently than a narrower one to achieve the same f-value
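The "twice the light per stop" rule is just area arithmetic (my gloss, not from the cheat sheet): light admitted scales with aperture area, and the diameter is focal length divided by f-number, so area goes as $1/N^2$. Checking one adjacent pair of full stops:

\[
\frac{A_{f/2.0}}{A_{f/2.8}} = \left(\frac{2.8}{2.0}\right)^{2} \approx 1.96 \approx 2
\]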
Shutter Speed
- Def: how quickly the shutter opens and closes
- Effect: slower speed lets in more light
- Indicated as fraction of a second:
1/8000, 1/4000, 1/1000, 1/500, 1/250, 1/125, 1/60, 1/30, 1/15, 1/8, 1/4, 1/2, 1 or -1, -2 etc.
- Effect: regulates amount of light allowed in
- each slower speed allows double the light of the previous
- 2nd Effect: Can freeze or blur a photo
Exposure
- Exposure = Aperture + Shutter Speed
- Hence, to maintain same Exposure:
increase in Aperture <=> decrease in Shutter Speed
translates as: reduce f-number <==> increase shutter speed
also can say: f-stop towards f/1.8 <==> shutter speed towards 1/8000 (a worked example follows)
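One way to make the trade concrete (my formulation, not from the cheat sheet): exposure value is $\mathrm{EV} = \log_2(N^2/t)$ for f-number $N$ and shutter time $t$ in seconds, and equal EV means equal exposure. Trading one stop each way, using the nominal full-stop values ($N^2 = 32$ for f/5.6):

\[
f/8 \text{ at } 1/125 \Rightarrow \log_2(64 \cdot 125) \approx 13,
\qquad
f/5.6 \text{ at } 1/250 \Rightarrow \log_2(32 \cdot 250) \approx 13
\]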
Depth of Field (DOF)
- Def: the extent to which close and far away objects stay in focus
- Smaller aperture extends DOF
- Deep DOF = everything is in focus = moving towards f/22
- Shallow DOF = only close objects in focus = moving towards f/1.8
- Focal length of lens affects DOF
- Wide-angle lenses have deeper DOF
- hence good for scenery shots
- Telephoto lenses have shallower DOF
- e.g. use a larger aperture with a telephoto to blur a background
- Wide-angle = 35mm; 50mm is standard; 80mm+ is telephoto (a hyperfocal example follows)
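A handy number for pinning DOF down is the hyperfocal distance (a standard formula, not from this cheat sheet): $H \approx f^2/(Nc)$ for focal length $f$, f-number $N$, and circle of confusion $c$; focus at $H$ and everything from $H/2$ to infinity is acceptably sharp. For example, assuming $c = 0.03$ mm:

\[
H \approx \frac{50^2}{8 \times 0.03} \approx 10{,}400\ \text{mm} \approx 10.4\ \text{m}
\]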
Thursday, January 17, 2013
The Switch to Linux
I upgraded my home WinXP box to Windows 7 back in 2012, and after hearing so much hype I was left rather disappointed. It looked really nice, but I couldn't help feeling the bloat wasn't balanced by real functionality. They were still putting lipstick on a pig.
That's when I got serious. I bought an MSI laptop (specs TBD) with the obligatory Windows 7 install, and as soon as I could I downloaded and installed Linux Mint 14. It was easier than any of the probably 100+ Windows installs I've done over the years.
I'll admit I am a developer and used Unix at my day job for almost 10 years in the past, so switching to Linux was always on my mind. But there were some other compelling reasons as well:
- first of all it's FREE
- it looks darn good
- the environment is tuned towards productivity, with multiple desktops and flexible hotkeys (Ctrl-Alt-UpArrow gets you a bird's-eye view of your workspaces - there's also a hot corner for that)
- built-in power tools like grep, cut, sed and awk; and a sensible command shell
- an industrial-strength file system - which after all, is largely what an OS is about
- tons of choices in free apps
- a choice in file managers that don't suck
And while we're talking about apps, here are some examples of things that made the transition easy:
- the Desktop and Menu area very similar to Windows
- LibreOffice - a full MS Office replacement
- I had already switched to it from MS Office after the frustration of not being able to install from my $200+ CDs after upgrading to Win7
- GIMP - a replacement for Photoshop, and...FREE
- PasswordSafe - a Linux equivalent that reads the same file format as the Win version, which I had been using; BTW this stores your passwords in one secure place - if you don't use something like this, YOU NEED TO
- Thunderbird email -- very similar to Outlook Express
- Firefox, of course
- Google Chrome is also available, although, in typical Google style, it is currently in Beta
- Banshee music player, pretty similar to iTunes
- Calendar, Clock, Calculator
- System monitor
- System configuration tools
- Program install tools
- A couple video players that work out of the box
- Tomboy - a great way to take notes!! Simple, organized - no waiting for Word to load....where was that doc again?? Exports to CLEAN HTML for easy blogging
- Character Map - look up and copy individual characters in each loaded system Font
- Font Viewer - little standalone app to try out Fonts
- Automount of USB devices - quick access to my iPhone files
- Sound Recorder - just a little app to record audio
- Phatch - a batched photo converter, useful for blogging
- i.e. dump a bunch of JPGs in a directory, run Phatch, and boom they're resized for uploading in 5 seconds
Some Kinks still to be Worked Out
Sure it's not all roses, for example:
- if you want to play your Windows games, you'll have to install Wine or PlayOnLinux and get that working
- iTunes vs. Banshee
- Banshee's closest equivalent to the iTunes Store integration is a Last.FM plugin, but it's not very convenient
- burning CDs in Banshee still produces nothing but drink coasters for me
- there is still a lack of support for Linux from some software vendors, such as
- drivers for your hardware - e.g. my MSI laptop is not fully supported, and I had to buy a USB Ethernet adapter and an external video camera
- various plugins, e.g. NASA's interactive Eyes on the Solar System is only supported on Windows and Mac
Tuesday, January 1, 2013
Why Van Helsing?
It's not just because I'm reading Bram Stoker's Dracula right now and it's awesome. But in my 20+ years in tech life I find myself constantly battling demonic monsters...leaping fully clothed into murky pools of water to scrape out an answer that lies hidden under 20,000 years of muck (measured in CPU time of course).
Often this occurs in the wee hours of the night, with no moon and a heavy gloom hanging over the earth. Sometimes we dispel the monster and the grid displays the widget. And we can sleep.