Friday, September 25, 2009

Vacuum Your Firefox

This has been blogged about in the usual places already, but for those who haven't seen it:
There's an addon for Firefox that defragments its SQLite databases, called Vacuum Places. I recently gave this a whirl and was pleased with the results: the responsiveness of my address bar was noticeably improved. Now, bear in mind that I use Firefox a lot. I have over 4000 bookmarks -- it's kind of getting out of control. Those who don't use it as much probably won't notice much of a difference, but give it a whirl and see what it does for you. Note that there is a disclaimer saying to back up your profile (easily done with MozBackup), but I haven't had any issues with it.
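
As I understand it, what Vacuum Places does under the hood is essentially run SQLite's VACUUM command against places.sqlite in your profile. A rough Groovy sketch of doing the same thing by hand -- the profile path is hypothetical, the sqlite-jdbc driver is assumed to be on the classpath, and Firefox should be closed first:

// Hedged sketch: run VACUUM against places.sqlite by hand.
// Assumes the sqlite-jdbc driver jar is on the classpath, the path below
// points at your actual profile, and Firefox is not running.
import groovy.sql.Sql

def placesDb = '/path/to/your/profile/places.sqlite'   // hypothetical path
def sql = Sql.newInstance("jdbc:sqlite:${placesDb}", 'org.sqlite.JDBC')
try {
    sql.execute('VACUUM')   // rebuilds the database file and reclaims free pages
} finally {
    sql.close()
}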

Some other addons you might want to check out:
Adblock Plus - never see another ad
SkipScreen - waits out the timers on file-hosting sites for you
DownloadHelper - download YouTube videos, a page of images at once, and more
DownThemAll! - download everything on a page, based on filters (all PDFs, all images, etc.)
Greasemonkey - run custom JavaScript in websites to do all sorts of nifty things (use Greasefire to light up when scripts are available for the current site, or get some here)
Stylish - loads custom CSS into sites (get some here)
Tab Mix Plus - one of the biggest reasons I use it is the 'duplicate tab' option, but 'close other tabs', 'close right tabs', and 'close left tabs' are pretty nifty too.

Duct Tape Programmers

On Wednesday, Joel blogged about the first chapter of Coders at Work, in which he admired 'duct tape programmers' who are willing to skip unit tests and code quality in order to ship on time. I don't really have much to say about it that Uncle Bob and his commenters haven't already said. I agree with just about everything Bob said except for
I found myself annoyed at Joel’s notion that most programmers aren’t smart enough to use templates, design patterns, multi-threading, COM, etc. I don’t think that’s the case. I think that any programmer that’s not smart enough to use tools like that is probably not smart enough to be a programmer period.
In an ideal world, this would be true. And as the industry matures, I think it is becoming increasingly true. However, there are still some programmers out there who aren't very talented and probably shouldn't be programming, and yet they still are. Now, I'm not an advocate of making your life difficult to weed out the less talented, but our ever-increasing toolbox is lowering the barriers to entry. This isn't necessarily a bad thing. I have a fair amount of faith in the invisible hand, and if this allows businesses to get their needs met, then so be it. Just be aware that there are people programming who aren't the sharpest knives in the box, and they may even be getting away with it...at least for now.

Wednesday, September 23, 2009

Don't Bother Testing for Null?

This appeared on Rainsberger's blog a couple of days ago
Unless I’m publishing an API for general use, I don’t worry about testing method parameters against null. It’s too much code and too many duplicate tests. Besides, I would be testing the wrong thing.
When a method receives null as a parameter, the invoker—and not the receiver—is missing a test
I think I've got to disagree with him on this one, unless I'm misunderstanding 'general use'. It might be OK not to test for null if it's just your code calling your code (though what happens when this gets pushed out, maybe service-ized, and suddenly others are calling it?). Even if you don't guarantee the right results when the method is passed a null, I would think you should at least make sure that it doesn't blow up. If it fails, it should fail gracefully. It's true that it's not the responsibility of the receiver to make sure the invoker is calling it correctly, and I wouldn't necessarily expect a test for every conceivable possibility. But nulls happen all the time; it's not unreasonable to expect a quick check. It wouldn't even be a lot of code. Something like
if (someVar == null) {
    Log.error("it was null");
    return;
}

would do nicely. I do agree there is test duplication between the receiver and invoker, in that both are testing for the null condition on the same variable, but this doesn't really bother me. It seems to me there are lots of tests that have overlap or dependencies. Maybe I'm just crazy. I left a comment on the post.
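
To make that concrete, here's a minimal Groovy/JUnit sketch of the kind of test I have in mind (Frobnicator and its process method are made-up names for illustration):

// Hypothetical example: verify that a null argument fails gracefully
// rather than blowing up. Frobnicator and process() are made-up names.
import org.junit.Test

class Frobnicator {
    def process(input) {
        if (input == null) {
            // log and bail out instead of throwing a NullPointerException
            return null
        }
        return input.toString().toUpperCase()
    }
}

class FrobnicatorTest {
    @Test
    void processShouldNotBlowUpOnNull() {
        def frob = new Frobnicator()
        assert frob.process(null) == null   // graceful failure, no exception
    }
}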

Monday, September 21, 2009

Extending NTFS with Bad Sectors

I wanted to grow an NTFS partition into some unallocated space this weekend on an XP machine. (If it were Vista or Win7, I would have used the built-in resizing in Disk Management.) I found I could not do so with my usual tool of choice (the GParted Live CD). The error message said it could not complete the operation because of two bad sectors, and to run chkdsk (which I did) and then use ntfsresize with the --bad-sectors option. When I tried that, it said it couldn't grow the filesystem unless I made the partition bigger with fdisk. The only way I know to do that would be to create an entirely new partition (which would mean all the hassle of reinstalling Windows and the needed apps). I was finally able to do it with EASEUS Partition Master Personal. Unfortunately, it is not open source, but it is free, and it did what I was trying to do in minutes, without restarting.

Friday, September 18, 2009

Types of Reviews

An interesting discussion was started in this month's section meeting about code reviews. People brought up the point that catching flaws in design implementation requires knowledge of the requirements, and that not having such knowledge limits the reviewer to looking only for bad programming practices in general. Of course, I'm new to all this, but it seemed logical to me to have three kinds of reviews.

- Design Review
- Implementation Review
- Code Review

The purpose of a design review would be to review the design of the system. This is a higher-level view and shouldn't involve any code. The purpose of an implementation review is to review how well the code meets the requirements (somewhat similar to acceptance testing); this review is probably the most time-intensive of the three. A code review is a review of the code itself (perhaps with some context, but not at the level of an implementation review). This should focus on errors in logic and adherence to accepted coding standards (not style). Am I missing something that should be reviewed?


I think when I prepared code for review, it helped when I attached an overview of the classes, their purpose, and their general relation to each other. If a class is very large (and probably needs refactoring), it may also help to attach an overview of it (at least for the methods most important to its job).



It was also pointed out that having people read over requirements and code takes a fair amount of time, which is time not spent on other tasks. An unanswered question was whether pair programming should complement standard reviews or replace them altogether, and how the time could be used most efficiently. Everyone agreed that the feedback loop needs to be shortened in some way, whether through pair programming, reviewing smaller amounts of code more often, or some other means.

Monday, September 14, 2009

Why Cowon is Awesome

The COWON S9 is infinitely more awesome than the iPod and the Zune. Here are a few reasons why: an AMOLED touch display, video playback, FLAC and Ogg support (though unfortunately no Apple Lossless), a completely customizable interface (Flash/ActionScript), themes and more themes, games and apps, drag-and-drop file syncing (no iTunes or special software required), and a highly customizable equalizer. And they regularly provide free firmware updates: on August 17 they updated the firmware with even more features (the previous update was July 24). It also has a microphone and a radio. I also hear they are working on porting Rockbox to COWON players.

I was a bit nervous when I purchased this; I carefully read reviews and so forth, but it's a Korean brand I had not heard of before. I've owned it for about three months now and I have absolutely no regrets. It doesn't have quite the storage capacity of an iPod or Zune (30GB), but I don't think I could fit my entire collection on those anyway. I keep my music in three different places on my computer -- some MP3, some FLAC, some Ogg. With my S9, I can just drag and drop whatever into it, and it maintains the same folder structure so you can easily find things (ever try making sense of the organization an iPod uses?). I'm using my music collection in ways that were simply not possible with my old iPod. If your iPod or Zune has recently bitten the dust, I highly recommend it.

The Not So Answered Variables in GPathResult

A while ago, I posted that I had gotten an answer on how to get the number of records in an XML file using XmlSlurper (or XmlParser) by passing a variable. It turns out it was not so answered: that approach only worked for elements that are immediate children of the root. It seems XmlSlurper and XmlParser can't resolve longer paths from a single String; the path needs to be broken up. (I'm not sure why it was implemented this way.) Fortunately, our own brilliant Jonathan Baker noticed this and suggested the solution below. It's not complicated once you figure out that the String has to be broken up:

def xmltxt = """
<file>
  <something>
    <record>
      <name>nakina</name>
    </record>
    <record>
      <name>buaboe</name>
    </record>
  </something>
</file>"""
 
def xml = new XmlSlurper().parseText(xmltxt)
def fullPath = 'file.something.record'

// break the path into its parts: ['file', 'something', 'record']
def pathElements = fullPath.tokenize('.')
// drop the root element's name, since the parsed result already represents it
pathElements -= xml.name()
def root = xml
// walk down the tree one property at a time
pathElements.each { node ->
    root = root."$node"
}
return root.size()

You could also use split, as John Wagenleitner suggested in response to my comment on his answer on StackOverflow:
def xmltxt = """
<file>
  <something>
    <record name="some record" />
    <record name="some other record" />
  </something>
</file>"""
def xml = new XmlSlurper().parseText(xmltxt)
String foo = "something.record"

def aNode = xml
foo.split("\\.").each {
  aNode = aNode."${it}"
}
return aNode.size()

My thanks to you both.

After talking with Josh, I understand why the Groovy people had to do what they did. Dots (.) are legal in XML element names, as long as the name doesn't start with one. (Other punctuation is allowed as well, though it is not recommended.) I don't think they could use forward slashes the way XPath does because of how Groovy has overloaded its operators, and dots had to be allowed, so this is what we're stuck with. Maybe they should have used spaces instead, since those aren't legal in element names -- though I don't know what impact that would have on other classes.
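
A contrived example of my own to illustrate the point -- the dot in the element name below is perfectly legal XML, which is exactly why a single dotted path string would be ambiguous; you have to quote such a name when navigating:

// Contrived example: a dot inside an element name is legal XML,
// so a single string like 'file.some.thing' could mean different paths.
def xmltxt = """
<file>
  <some.thing>
    <record/>
    <record/>
  </some.thing>
</file>"""

def xml = new XmlSlurper().parseText(xmltxt)
// quoting the property name lets GPath treat the dot as part of the element name
assert xml.'some.thing'.record.size() == 2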

Friday, September 11, 2009

So, Why The Name?

Thought I would give credit where credit is due. I'm not a terribly creative person; if someone else has something creative I can use, I'll most likely rip it off. 'Witty Keegan' is one of the nicknames bestowed upon me by my friend and college roommate, Vitus Pelsey ('Keegasaurus' being one of the others). I thought a pun would make for a good blog title, while giving a false impression of witticism and intelligence.

Bad GString! Bad! --- Wait, My Bad

I was trying to write a test for a method that inserts a GString into a StringBuilder, and I was frustrated to find that my stubbed method was never called!

Initially I got mad and blamed Groovy for making my life more difficult with its ridiculous automagical boxing. Then Josh pointed out that a StringBuilder is actually a CharSequence. It's not a Groovy thing; I'm just dumb. Fire up your GroovyConsole and observe:

StringBuilder.metaClass.insert = {int arg0, Object arg1 ->
  println "FOOO"
}
StringBuilder.metaClass.insert = {int arg0, String arg1 ->
  println "BARR"
}
StringBuilder.metaClass.insert = {int arg0, CharSequence arg1 ->
  println "I'M HERE!!!"
}

StringBuilder sb = new StringBuilder()
def foo = "ROGER"
def bar = "$foo"
sb.insert 0, bar
The result is "I'M HERE!!!". All I had to do was stub the right method.
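
If you want to see why the CharSequence overload wins, a quick check in the GroovyConsole shows that a GString is a CharSequence but not a String:

def foo = "ROGER"
def bar = "$foo"
assert bar instanceof GString         // it's a GString...
assert bar instanceof CharSequence    // ...which is a CharSequence...
assert !(bar instanceof String)       // ...but not a String, so a String stub never fires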

If you're experiencing String vs GString issues, like I thought I was, you may find this helpful.

Integration Tests Are a Scam?

I recently listened to a talk given by J. B. Rainsberger (author of JUnit Recipes) with the title Integration Tests Are a Scam (summary notes here). If the idea seems crazy, blame the fact that he's from Canada ;) These are some quick thoughts I had; I may expand on them later.

Here's some definitions he gives:
Basic Correctness
"Given the myth of perfect technology, do we compute the right answer?"
Myth of Perfect Technology
"Assuming we can use an arbitrary large amount of memory, for an arbitrary amount of time, on a Turing machine for spherical people[...]"
Integration Tests
"...any test whose result (pass or fail) depends on the correctness of the implementation of more than one piece of non-trivial behavior."
"You should never need to write an integration test to show basic correctness." He believes our largest problems lie in basic correctness. After we get this right, then we can worry about issues of performance, security, etc. The question of basic correctness is where he focuses his efforts. (He paraphrases a quote I believe based on the Pareto Principle).

Downsides of integration testing:
  • Integration tests are slow
  • Integration tests don't tell you where the failure occurred (may be difficult to find even with debugger, assuming TDD hasn't caused you to forget how to use one)
  • In order to have enough tests at the integration level to test thoroughly, the number of tests that need to be written increases combinatorially, based on code paths
  • There is a lot of duplication in test setup

Now, it should be noted that he is not talking about acceptance tests. He says that acceptance tests tend to be end-to-end, and that is OK. But end-to-end tests should not be used for developer tests. He is also not altogether against integration tests for finding bugs, he just doesn't want them permanently added to the project. Bugs found through an integration test should create new object tests. "I don’t doubt the necessity of integration tests. I depend on them to solve difficult system-level problems. By contrast, I routinely see teams using them to detect unexpected consequences, and I don’t think we need them for that purpose. I prefer to use them to confirm an uneasy feeling that an unintended consequence lurks."

Instead, he recommends 'collaboration tests' (commonly called 'interaction tests') and 'contract tests'. By collaboration tests, he means stubbing out or mocking the collaborators to isolate functionality and make sure every way the class can interact with its collaborators behaves as expected. This is half of the work (and actually the easier half): you've checked that you ask the right questions and can handle all the responses.
The missing piece (and what commonly drives people to rely on integration tests) is a misunderstanding of the interaction between the piece in question and its collaborators.

The second half is 'contract tests'. The first of the two checks on the other side of the interface is whether the collaborator is able to provide a response when "the star" (the class in test, CIT) asks for it (is it implemented? can it handle the request in the first place?). The second is whether the collaborator responds in the way the CIT is expecting. "A contract test is a test that verifies whether the implementation respects the contract of the interface it implements." There should be a contract test for every case we send the collaborator and every case the collaborator might send back. Again, this will use stubbing and mocking. The advantage of this approach is that you know when you have enough tests (two for each behavior). I've tried to diagram the idea thusly:

He claims that if you ask these questions between every two services and focus on basic correctness, we can be "arbitrarily confident" in the correctness. The number of tests increases additively instead of combinatorially, and the suite is easier to maintain, has less duplication, and is faster to run. If something goes wrong, you are either missing a collaboration test, missing a contract test, or the two do not agree, which makes troubleshooting easier. As of yet, there is no automated way of checking that every collaboration test has a matching contract test.
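
To make the collaboration/contract pairing concrete, here's a small Groovy sketch of how I understand it (the PriceService and PriceCatalog names, and the use of map coercion as a stub, are my own, not from the talk):

import org.junit.Test

// A collaborator interface and the class that uses it (made-up names).
interface PriceCatalog {
    BigDecimal priceFor(String sku)
}

class PriceService {
    PriceCatalog catalog
    BigDecimal totalFor(List<String> skus) {
        skus.sum { sku -> catalog.priceFor(sku) } ?: 0.0
    }
}

class PriceServiceCollaborationTest {
    // Collaboration test: stub the collaborator, check that we ask the
    // right question and can handle the answer it gives back.
    @Test
    void totalsThePricesTheCatalogReturns() {
        def stubCatalog = [priceFor: { String sku -> 2.50 }] as PriceCatalog
        def service = new PriceService(catalog: stubCatalog)
        assert service.totalFor(['a', 'b']) == 5.00
    }
}

class HardCodedPriceCatalogContractTest {
    // Contract test: verify that a real implementation can actually return
    // the kind of value the collaboration test assumed it would.
    @Test
    void returnsAPriceForAKnownSku() {
        PriceCatalog catalog = new HardCodedPriceCatalog()
        assert catalog.priceFor('a') != null
    }
}

// A trivial implementation so the sketch is self-contained.
class HardCodedPriceCatalog implements PriceCatalog {
    BigDecimal priceFor(String sku) { 2.50 }
}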

When I saw the title of the talk, I initially reacted rather violently against the notion. I'm still not sure if I'm 100% behind it, but I think there are some good points raised about integration tests and their utility. However, as Dan Fabulich points out in a reply to a response Rainsberger gave to a comment about a Mars rover failure, figuring out that you are missing a test may not come easily.
"The ability to notice things" is high magic. If you have that, you can find/fix any bug without any tests... why don't we all just "notice" our mistakes when writing production code? In this case you're just using intuition to notice a missing test, but that's no easier than noticing bugs.

As you know, I share your view that integration tests are tricky, in the sense that writing one tempts you into writing two, where instead you should be writing more isolated unit tests. But unit tests have the opposite problem: once you have some unit tests, it's too easy to assume that no more testing is necessary, because your unit tests have covered everything. By exaggerating the power of unit tests and the weakness of integration tests, you may be doing more harm than good.

Imagine you're actually coding this. You just finished writing testDetachingWhileLanded and testDetachingWhileNotLanded. (It was at this point in your story that you first began to "notice" that a test was missing.) You go back over the code and find you have 100% branch coverage of the example application. Your unit tests LOOK good enough, to a superficial eye, to an ordinary mortal. But you're still missing a critical test. How are you supposed to just "notice" this?

More generally, how are you supposed to build a habit or process that notices missing tests *in general*?

I've got just the habit: write all the unit tests you can think of, and then, if you're not sure you've got enough unit tests, do an integration test. You don't even necessarily have to automate it; just try it out once, in real life, to see if it works. If your code doesn't work, that will help you find more unit tests to write. If it does work, don't integration-test every path; you were just double-checking the quality of your unit tests, after all."
<edit>
While I wouldn't go so far as to call it 'magic', finding all the edge cases can be difficult and may require a fair amount of knowledge about the collaborator. Rainsberger later commented that his method of ensuring every condition is tested is:
Every time I stub a method, I say, "I have to write a test that expects the return value I've just stubbed." I use only basic logic there: if A depends on B returning x, then I have to know that B can return x, so I have to write a test for that.

Every time I mock a method, I say, "I have to write a test that tries to invoke that method with the parameters I just expected." Again, I use only basic logic there: if A causes B to invoke c(d, e, f) then I have to know that I've tested what happens when B invokes c(d, e, f), so I have to write a test for that.

Dan Fabulich suggests adding either "Every time I stub a method that can raise an exception, I have to stub it again with a test that expects the exception" or "Every time I stub a method to return X, I also have to write a test where the stub returns Y. And Z. For all possible return values of the method." Of course, it's impossible (or at least very difficult) to be sure you've gotten all edge cases.

My takeaway from all this is that integration tests are overused, often as a half-baked attempt to remedy poor unit tests (even though the two kinds of tests try to solve different problems). While I'm not quite ready to do away with integration tests entirely (I think they provide useful documentation of examples of use without going into the nitty-gritty details of a unit test, and they make a nice supplement to unit tests), I think one should recognize their place: performance testing and general review -- NOT finding bugs, ensuring changes didn't break anything, or pinpointing where a failure occurred. One could add them as a separate module that is only built when requested, or use something like the Failsafe plugin for Maven.


</edit>

One idea that he mentions early on in the talk is having only one assert per test. Using multiple asserts is something I'm occasionally guilty of (especially if the method being tested does several things); it should probably be treated as a testing smell that may indicate the need for some refactoring.

He also mentions what first got him interested in TDD, which I thought was one of the most compelling reasons I've heard so far to use it. When you don't use TDD, you have a seemingly endless, depressing cycle of writing tests, fixing bugs, writing more tests, and so on...how do you know when you're finished? TDD has a somewhat more definitive ending point:
  • Think about what you want to do
  • Think about how to test it
  • Write a small test. Think about the desired API
  • Write just enough code to fail the test
  • Run and watch the test fail. (The test-runner, if you're using something like JUnit, shows the "Red Bar"). Now you know that your test is going to be executed
  • Write just enough code to pass the test (and pass all your previous tests)
  • Run and watch all of the tests pass. (The test-runner, if you're using JUnit, etc., shows the "Green Bar"). If it doesn't pass, you did something wrong, fix it now since it's got to be something you just wrote
  • If you have any duplicate logic, or inexpressive code, refactor to remove duplication and increase expressiveness -- this includes reducing coupling and increasing cohesion
  • Run the tests again, you should still have the Green Bar. If you get the Red Bar, then you made a mistake in your refactoring. Fix it now and re-run
  • Repeat the steps above until you can't find any more tests that drive writing new code
(from the C2 wiki)
I like that. This would help address my previously mentioned fear of not knowing when you've tested everything. (Though I'm sure it's not foolproof.)
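
As a toy illustration of one trip around that loop (my own example, not from the talk or the wiki): start with a failing test, make it pass with the least code possible, then refactor and repeat.

import org.junit.Test

// Step 1: write a small test for code that doesn't exist yet -- Red.
class RomanNumeralTest {
    @Test
    void oneIsI() {
        assert new RomanNumeral().convert(1) == 'I'
    }
}

// Step 2: write just enough code to make it pass -- Green.
class RomanNumeral {
    String convert(int n) {
        'I'   // deliberately naive; the next failing test will force more
    }
}

// Step 3: refactor (nothing to clean up yet), then repeat with the next test.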

Thursday, September 10, 2009

iTunes Pains (Again)

This time the upgrade to iTunes 9 broke my beloved MediaMonkey. I really should just use something else to get my podcasts and blow away iTunes. I don't even have an iPod anymore and have no plans of getting an iPhone -- there's really no reason to keep this piece of trash on my hard drive.
A work around was posted here: http://www.thebitguru.com/blog/view/310-MediaMonkey%20and%20iTunes%209
Granted, it's not iTunes' fault per se, and I'm sure MediaMonkey will get it fixed soon -- it's just damned irritating. I didn't follow his instructions exactly; what I did was rename d_iPhone.dll to d_iPhone.dll.disabled in the MediaMonkey plugins folder under Program Files.

EDIT:
MediaMonkey has pushed out a beta release (a bit earlier than they were planning) that fixes compatibility with iTunes 9, only one day after it came out: http://www.mediamonkey.com/beta/MediaMonkey_3.1.2.1267.exe. It might be better to delete the file rather than rename it (if you're comfortable with that); then it will be replaced when the new installer is run.

Beware of openStream

Yesterday we discovered a bug where one of our projects hung and had to be CTRL-C'd. The culprit ended up being one line:
URL url = new URL("http://someurl")
InputStream is = url.openStream()//<---this one
The openStream method is actually shorthand for openConnection().getInputStream(): it returns a newly instantiated URLConnection and calls getInputStream() on it. The problem, which the API docs won't tell you (but which is visible in the source), is that the default values for connectTimeout and readTimeout are 0. This means that if the connection or read stalls, it will keep waiting forever. While we tested that the connection was good before we began processing, reading from the URL caused the program to hang when the service went down in the middle of processing. The fix was mentioned in this StackOverflow question: don't create the InputStream from the URL directly, but from a URLConnection with the timeouts set:
int timeoutMs = 5000 // 5 seconds
try {
  URL url = new URL("http://someurl")
  URLConnection conn = url.openConnection()
  conn.setConnectTimeout(timeoutMs)
  conn.setReadTimeout(timeoutMs)
  InputStream is = conn.getInputStream()
} catch (java.net.UnknownHostException uhe) {
    // something useful here
} catch (java.net.SocketTimeoutException ste) {
    // something useful here
}
Eric has posted about this as well (it was his project we learned this from).

Friday, September 4, 2009

Unit Testing Struggles

So, over the last week and a half or so I've been having a bit of an identity crisis over unit tests. Sure, I got the basic idea of writing tests for each method in college, but like most people I never used it much while in school. Now that I'm at OCLC and unit testing is expected, I'm trying to develop my own philosophy on the matter and get a feel for it.

I felt like I should bring code coverage up on the project I was working on, but for purely religious reasons. Now, granted, this project is heavy in IO (it's the same project I mentioned previously), so perhaps I'd see less benefit from unit tests here than on other projects. But despite the fact that I now have >70% coverage (up from 0%), I haven't found a single bug with the unit tests, only with integration tests. The results might also have been different had I used TDD or my home-brewed syncretic TOD approach.

Despite this, I don't dispute that unit testing used with CI can help as a tool for preventing software regressions and as up-to-date documentation for the code (though I still don't find it a very natural read, except for BDD approaches like EasyB). It's also useful for finding the root cause of a problem: whereas an integration test might only be able to say "something blew up", a unit test might be able to tell you "here is what blew up". However, the value of the unit tests I've created remains to be seen. Meanwhile, they did deliver value as a learning platform.

Groovy has served as an excellent testing platform for me. This particular project was written in Groovy, but I think this would work well for Java projects too (there is, however, the slight overhead of the additional dependency). I was able to do everything I wanted (still a few kinks to work out) with stubbing via the wonderful, magical ExpandoMetaClass. There are a few tests I have yet to write where I may have to use Groovy's mocking framework.

A couple gotchas:
// getters cannot be overridden using just the property name, even though they can be called that way
class Foo {
  int bar
}

Foo.metaClass.getBar {->
  return 44
}
foo1 = new Foo()
assert foo1.bar == 44  // this passes

GroovySystem.metaClassRegistry.removeMetaClass Foo
Foo.metaClass.bar {->
  return 42
}
Foo foo1 = new Foo()
assert foo1.bar == 42  // this does not

I wanted to send some pre-canned input to a method that uses a BufferedReader to get its input. The constructor for the reader eventually constructs a File to get the data. I can't extend File or create a new interface with all the File behavior for a test, because that would require modifying the BufferedReader and Reader classes to match. I've not found a way around this.
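
For what it's worth, the workaround I keep seeing suggested is to have the method accept a Reader instead of building one from a File, so a test can hand it a StringReader and never touch the filesystem. A rough sketch -- RecordLoader and loadFrom are made-up names:

import org.junit.Test

// Hypothetical sketch: if the production code accepts a Reader, the test
// can supply a StringReader with pre-canned input instead of a real File.
class RecordLoader {
    List<String> loadFrom(Reader reader) {
        new BufferedReader(reader).readLines().findAll { it }  // skip blank lines
    }
}

class RecordLoaderTest {
    @Test
    void readsPreCannedInputWithoutTouchingTheFilesystem() {
        def input = new StringReader("first\nsecond\n\nthird\n")
        assert new RecordLoader().loadFrom(input) == ['first', 'second', 'third']
    }
}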

Another problem was a method that takes the String path of a file containing the filepaths of input files to process, and adds each path string and a BufferedReader on that file to a collection (I'm not sure why that decision was made). So I tried to mock out eachLine(). But there is a problem...
// you cannot metaclass constructors, therefore, this code doesn't work. I've still got to figure out a way of faking a file, since File cannot use map coercion because it has no default constructor
String aPath = "a/path/to/file"
String fakeData = "some\nfake\nstuff\n"

File.metaClass.init {String filePath ->
  def mock = [eachLine: {return "${fakeData}"}, exists: {return true}] as File
  return mock
}

File f = new File(aPath)  // doesn't work

There are still some larger questions I have that maybe I'll never get THE ANSWER to. One question in particular I've been struggling with is "How simple is too simple to break?" The JUnit authors suggest this is a never-ending source of pain:
becomeTimidAndTestEverything
while writingTheSameThingOverAndOverAgain
    becomeMoreAggressive
    writeFewerTests
    writeTestsForMoreInterestingCases
    if getBurnedByStupidDefect
        feelStupid
        becomeTimidAndTestEverything
    end
end
And it's still very easy for me to lose sight of what I'm actually testing in the midst of all the mocking, stubbing, and so forth. More than a few times this last week I've looked down and realized that what I've written is so paranoid that it is really testing stuff that can only fail if the compiler or JVM fails or cosmic rays come down and change my data.

Just as important as making sure your tests pass is making sure they can fail. I struggled with this the most when I started this process: I thought, "wonderful, everything works," when it turned out the code didn't work quite the way I thought it did and my tests were actually written in such a way that they would NEVER fail. All those green bars might not actually mean much. That's not to say they're worthless, just maybe not as valuable as you might initially think.
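
Here's a contrived Groovy example of the kind of test I mean -- the metaClass stub replaces the very behavior under test, so the assert can never fail no matter what the production code does (Calculator is a made-up name):

import org.junit.Test

// Contrived example of a test that can never fail: the metaClass stub
// replaces the behavior under test, so the assert only checks the stub.
class Calculator {
    int add(int a, int b) { a - b }   // deliberately wrong, yet the test below still passes
}

class CalculatorTest {
    @Test
    void vacuousTest() {
        Calculator.metaClass.add = { int a, int b -> a + b }
        assert new Calculator().add(2, 2) == 4   // always green, proves nothing
        GroovySystem.metaClassRegistry.removeMetaClass Calculator  // clean up
    }
}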

And this leads to my greatest fear: How do you know when something is thoroughly tested? And can some sort of confidence be associated with your tests? Clearly, code coverage doesn't cut it. I'm still new to all this, but I'm not taking much comfort from unit tests. I feel a bit better when integration tests return exactly the result I'm expecting and I test several possible scenarios. Still, even with this, you cannot test all possible scenarios and when do you know you've got enough? I guess when something blows up and you didn't find it. (>_<)

P.S. by 'Mein Kampf' I just meant the literal 'my struggle' it has nothing to do with Hitler or his work.

iTunes Blows

iTunes blows, but we all knew that. The latest chapter in the suckage occurred when I deleted UpperFilters from HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318} in an attempt to solve my problem of the disappearing CD drive (it didn't). Instead, iTunes lost the ability to write CDs -- not something I really care about, since I only use iTunes for podcasts now anyway, but the stupid error was kind of annoying. The message said to reinstall iTunes, yet neither a repair install nor uninstalling and reinstalling fixed the issue, even after I manually recreated the key. Apparently there are some magic drivers in use by iTunes that their support site will tell you nothing about. I finally got it fixed. My thanks to Ralph and Google.

The Obvious

Read a quote yesterday I rather enjoyed:

There are two ways of creating a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.
-- C.A.R. Hoare