Friday, June 3, 2016

Microsoft Quality Engineering

Go Microsoft account recovery guys.  I tried every one of their recovery methods, including email, a phone call, and a text message, and none of them worked at all (my phone number is correct, as is my email).  After waiting 30 minutes for the code to arrive, I finally got one from them in an email.  See below what happened.  ;-)  Not only can they not actually come through on any of their account recovery methods, they can't even use the code once they get around to sending it to me.  Quality engineering there.

Wednesday, September 23, 2015

Solving Problems with Mac OS Server Spotlight Sharing

We have a Mac server for our advertising department that contains tons of images.  They use Spotlight on their Macs to search the shared volumes from the server.  There were a few folders that had somehow become stuck: new files and folders added inside them would not be found when searching.

I connected to the server and found that searching directly on the server worked just fine for the problem folders.  That ruled out any Spotlight exclusions on those folders, since excluded folders would never show up at all.

I decided to check the index status using mdutil:

mdutil -as

Surprisingly, it showed the stuck folder as having its own index, and that the index was working fine.  I tried to figure out how to modify the list of folders that have their own indexes and couldn't find anything about it.  The stuck folder was not a separate mount from the main volume, so it should not have been in that list.

Eventually, after much searching, I found that inside the .Spotlight-V100 folder at the root of each volume there is a file called VolumeConfig.plist that holds the volume's indexing configuration.  For some reason this file contained a special entry for my stuck folder.  So I edited it and took out the offending entry, leaving only the block for the root of the volume, using:

sudo vi /Volumes/<volume name>/.Spotlight-V100/Store-V1/VolumeConfig.plist
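
If you want to peek at what is in that file before editing it, plutil can pretty-print it (the exact keys can vary by OS X version, but the extra folder should show up as its own entry, separate from the entry for the root of the volume):

sudo plutil -p /Volumes/<volume name>/.Spotlight-V100/Store-V1/VolumeConfig.plist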

Once that was gone, I restarted the indexing service using:

sudo launchctl stop com.apple.metadata.mds
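
Stopping it like that just makes launchd bring the daemon right back up, so it effectively restarts indexing.  If you want to double-check that it came back, something like this should list it again with a fresh PID:

sudo launchctl list | grep com.apple.metadata.mds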

Now new files and folders added inside my stuck folder instantly appeared in the search box on a remote computer.  Existing files still didn't show up, though, so I still needed:

sudo mdutil -E /Volumes/<volume name>

to force it to reindex the volume.

Now everything is working great!

My assumption is that at some point this folder was shared as its own volume.  That got the folder into the configuration list so that it had its own index.  Then the volume was deleted, but the index remained for some reason.  It seems that Spotlight on the local system is aware of the potential for multiple indexes and searches them all, but remote searching only searches the index at the root of the volume.  Also, the indexer for some reason doesn't index folders that have independent indexes into the root index.

I'm sure there are some bugs here, but this seems to resolve the issue.

Enjoy!!

Thursday, May 16, 2013

Making Sliced Emails Work Reliably In All Browsers

After spending the last two and a half hours fighting with a new email built from a complicated sliced table (created by Photoshop), I have determined that the following items are crucial for avoiding gaps and alignment problems in Internet Explorer, Chrome, and Safari.

  1. Photoshop's blank 1px spacer table cells at the end of each column and row are ingenious and important.   Don't delete them (like I did) because they keep IE from screwing up the table in email clients.
  2. Photoshop puts an accurate height on the containing table, which is important for IE, which doesn't seem able to add up the row heights on its own.
  3. Wherever you see an alt attribute (i.e., on the images), search and replace to add 'border="0" style="display:block"', which makes Chrome and Safari handle the spacing better.
With these three items in place, I think your table will not get screwed up... knock on wood.  A rough sketch of what one finished row looks like is below.
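
To make that concrete, here is a minimal sketch of one sliced row with all three items in place (the file names and pixel sizes are made up; only the pattern matters):

<!-- hypothetical two-slice row, plus Photoshop's 1px spacer column and spacer row -->
<table width="601" height="200" border="0" cellpadding="0" cellspacing="0">
  <tr>
    <td><img src="images/slice_01.jpg" alt="" width="300" height="199" border="0" style="display:block"></td>
    <td><img src="images/slice_02.jpg" alt="" width="300" height="199" border="0" style="display:block"></td>
    <td><img src="images/spacer.gif" alt="" width="1" height="199" border="0" style="display:block"></td>
  </tr>
  <tr>
    <td><img src="images/spacer.gif" alt="" width="300" height="1" border="0" style="display:block"></td>
    <td><img src="images/spacer.gif" alt="" width="300" height="1" border="0" style="display:block"></td>
    <td><img src="images/spacer.gif" alt="" width="1" height="1" border="0" style="display:block"></td>
  </tr>
</table>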

Monday, July 2, 2012

Everest's Disco Yeti and Politics of Engineering


I finally watched that great Discovery Channel video tonight when I got on a kick to try to see the Yeti in A mode.  After watching the movements and seeing the mechanism, I really don't get it.  This cannot be an engineering issue or a money issue.  This MUST be a political issue.  Doubtless they had tons of money allocated for maintenance on this monster, and recording a new movement sequence for this guy that just moved his fingers, eyes, and head, as shown in the video, would not put any stress on his "cracked foundation" and would not cost anywhere near the amount of money they saved by running Disco Yeti for 5 years.

This, my friends, is almost assuredly Imagineering pouting that the guys in Florida won't spend the money to fix this right.  So they refuse to put in the half day of reprogramming it would take to make this guy move a bit.  They can't stand to have their great achievement neutered into a normal, functional animatronic.  It's all or nothing!  It totally smells of grown adults acting like 5-year-olds.

Now granted, their fear may be justified, because if they created a limited A mode then operations might never bother to return this guy to his former glory...  Still, can't they just work out a deal to do it for the sake of the show, in exchange for a promise that it will be fixed properly once Avatar land comes online?  It is all about the show, right, and not about Joe Rohde's (or someone else's) pride?

Maybe this is why Disney doesn't seem to let Imagineering do animatronics much anymore.  From what I have heard of late, they give them all to outside contractors like Garner Holt.  All of Radiator Springs Racers was done by them, right?  Was Little Mermaid done internally?  Maybe her floating hair was?  Sure, their group temporarily screwed up Murphy, the new Fantasmic dragon, but at least Disney could blame someone and not have to baby them to get a fix.

What do you think?  Am I on the right track?

Wednesday, February 1, 2012

Prevent All Undesirable Apache Methods

So our security audit claims that I have to shut down all Apache httpd methods except POST, GET, and HEAD.  I went to the Apache documentation, and it says you should use LimitExcept.  Sounds great, right?  So I tried using it in all the places they allow, but it didn't work anywhere I put it.  After scouring the web I just gave up and used something really simple:


# Return 403 Forbidden for any request method other than POST, GET, or HEAD
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^(?!POST|GET|HEAD)
RewriteRule .* - [F]

It works great (with a minor performance cost)... I wish Apache would just fix LimitExcept so it could be applied globally.  Some of us don't use Directory entries the way they expect.
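
For reference, here is roughly what the documented approach looks like (a sketch only: the directory path is made up, and the Order/Deny lines are Apache 2.2 syntax; 2.4 would use Require all denied instead).  The access control directives it relies on only take effect inside Directory, Location, or Files style contexts, which seems to be why it never did anything for me at the global level:

<Directory "/var/www/htdocs">
    <LimitExcept POST GET HEAD>
        Order allow,deny
        Deny from all
    </LimitExcept>
</Directory>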

Just thought I'd post, though, so that others don't have to suffer the same waste of time I did.

Friday, February 25, 2011

Simple Way to Do a Facebook Like Check in Java


People are making this so hard.  Now that Facebook is moving away from FBML, you've got to do the like check yourself instead of using the very convenient FBML tag for it.  Here is my simple solution after 3 hours of trying the hard versions.

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.servlet.http.HttpServletRequest;
import sun.misc.BASE64Decoder;

// Looks for liked":t or liked":f in the decoded signed_request payload
static final Pattern FB_SIGNED_REQUEST_PATTERN = Pattern.compile("liked\":(.)");
static final BASE64Decoder BASE64_DECODER = new BASE64Decoder();

public static boolean isFacebookFan(HttpServletRequest request) throws Exception
{
    String fbreq = request.getParameter("signed_request");
    if (fbreq == null) throw new Exception("No request");
    // The signed_request is base64 encoded; decode it and just regex the JSON payload
    fbreq = new String(BASE64_DECODER.decodeBuffer(fbreq));
    // log.error(fbreq);  // (debug output; assumes a logger named log exists)
    Matcher m = FB_SIGNED_REQUEST_PATTERN.matcher(fbreq);
    return m.find() && m.group(1).equals("t");
}
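
For completeness, here is a rough sketch of calling it from the servlet that handles the canvas page POST (the JSP names below are made up for illustration):

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical handler in the canvas page servlet
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws IOException
{
    try {
        if (isFacebookFan(request)) {
            response.sendRedirect("fan-content.jsp");     // hypothetical fans-only page
        } else {
            response.sendRedirect("please-like-us.jsp");  // hypothetical non-fan page
        }
    } catch (Exception e) {
        response.sendError(HttpServletResponse.SC_BAD_REQUEST);
    }
}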

Tuesday, January 11, 2011

The Firesheep Problem, and How rcwilley.com Is Protected

Recently, with the new "Firesheep" Firefox addon that steals Facebook and Twitter sessions over unsecured wifi, sidejacking is in the news.  I thought I'd sit down and write about my solution to the problem, which protects rcwilley.com sessions from being hijacked.

A few years ago, when I changed jobs into the web engineering business, I was forced to get up to speed on session cookies and how they are used.  For those of you not familiar with how cookies work, here is a quick, simplified primer:

Cookies are little pieces of data that a website can send to your browser.  Then, whenever your browser communicates with the same server it got the cookie from, it sends the cookie along with the request.  The web server can look at this cookie and know which browser it is talking to.  When you log into a website, it is very common for it to send a piece of data called a session cookie to your browser.  Then every time you ask for a new page, the site knows who you are.  This is all perfectly secure and safe as long as you never communicate with the server over a non-secure http connection.  However, most websites, after using a secure https connection to send your login and password, switch back to http for performance.  They don't pass your password any more, but every request from your browser sends the session cookie UNPROTECTED to the server.  That is how the server knows who you are for the rest of your visit.
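
To make that concrete, the exchange looks roughly like this (the cookie name, value, and page path here are made up for illustration):

HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=abc123; Path=/

GET /some-page HTTP/1.1
Cookie: JSESSIONID=abc123

The Set-Cookie header comes back once after you log in, and from then on your browser repeats the Cookie header on every request, readable by anyone who can see the traffic whenever the request goes over plain http.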

Now, there is a concept of a secure cookie, which the browser is only allowed to send over a secure https connection, but secure cookies are hardly ever used.  This is because the user may suddenly make a request over plain http, and the server won't know who it is.
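
Marking a cookie secure is just one extra attribute when it is set (again, the name and value are made up):

Set-Cookie: SECURESESSION=xyz789; Path=/; Secure

The browser will hold on to that cookie but refuse to send it on any plain http request.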

One of my first tasks was to create a single sign-on kind of solution that allowed customers to log into our site and stay logged in while they moved between normal and secure pages.  I got it working the commonly accepted way of using the session cookie to maintain the user's login credentials throughout their visit.  This worked just fine, but I wanted to make sure my solution was secure when we switched between http and https.

After hunting around on the internet for ways to hack sessions, I learned about sidejacking.  This is when someone able to watch network traffic between a browser and the web server just grabs a session cookie and uses it to pretend to be someone else.  I asked one of the top engineers at my web consulting firm how to prevent this, and he said I shouldn't worry about it because it was too hard to sidejack a cookie, and no one else worried about it.  I visited many big websites and watched their cookie usage.  None of them used secure cookies.  Facebook and Twitter were common examples of this kind of site, so it seemed that my co-worker was right about sidejacking being a non-issue.

Basically web sites seem to be in the following camps:

1. Sites that are stateless and need no cookies.
2. Sites that use https for login but no secure cookie, risking an http connection sending the cookie in the open.  These are the risky ones, like Facebook and Twitter.
3. Sites like 2 that, fortunately, make you log in again every time you go into a secure area.  This helps.
4. Sites like 2 that make you log in every time you enter a secure area and then use a secure cookie.  Amazon appears to be in this camp.
5. Sites that are completely https, use secure cookies, and suffer the performance penalties.  Mostly banks and the like.

My favorite kind of site is type 2, because it doesn't irritate the user after they have already logged in, but I just didn't feel right about the risk of being a type 2 site.  So for rcwilley.com I adopted a combination solution that gives me the benefits without the problems.  The simple solution is to use both secure and normal cookies, and to require the secure cookie whenever the user re-enters a secure area.  It was very easy to implement and, as far as I can figure, completely solves the problem.  Twitter and Facebook ought to consider this method if they don't want to go completely secure like a bank.

Now some nitty-gritty details for those who care.  When the user logs in, I do it over a secure request and give them their normal session key in a normal cookie.  I don't have to mess with the default behavior of Tomcat at all.  At the same time I also generate a secure cookie and send that as well.  The secure cookie value is simply stored in their session with everything else.  Thereafter they can go to an http section of the site and back without having to log in again.  Whenever they make a secure request from then on, I just check that they also sent me the correct secure cookie.  Otherwise they have to log in again.  Works like a charm.
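
This isn't our production code, but here is a minimal sketch of the idea as a servlet filter, assuming Tomcat-style sessions (the cookie name, session attribute key, and login page path are all made up for illustration).  At login time, over https, you generate a random token, store it in the session, and send it back as a Secure cookie; after that, the filter rejects any https request whose secure cookie doesn't match what is stored in the session:

import java.io.IOException;
import java.security.SecureRandom;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Sketch: https requests must carry a Secure cookie whose value matches
// a token that was stored in the session when the user logged in.
public class SecureCookieFilter implements Filter {

    static final String SECURE_COOKIE = "SECURESESSION"; // hypothetical cookie name
    static final String SESSION_KEY = "secureToken";      // hypothetical session attribute

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        HttpSession session = request.getSession(false);
        String expected = (session == null) ? null : (String) session.getAttribute(SESSION_KEY);

        // Only guard secure requests made by someone who has already logged in
        if (request.isSecure() && expected != null
                && !expected.equals(findCookie(request, SECURE_COOKIE))) {
            response.sendRedirect("/login.jsp"); // hypothetical login page: make them log in again
            return;
        }
        chain.doFilter(req, res);
    }

    // Call this at login time, over https, right after authentication succeeds
    public static void issueSecureCookie(HttpServletRequest request, HttpServletResponse response) {
        String token = Long.toHexString(new SecureRandom().nextLong());
        request.getSession().setAttribute(SESSION_KEY, token);
        Cookie cookie = new Cookie(SECURE_COOKIE, token);
        cookie.setSecure(true); // browser will only send it back over https
        cookie.setPath("/");
        response.addCookie(cookie);
    }

    private static String findCookie(HttpServletRequest request, String name) {
        if (request.getCookies() != null) {
            for (Cookie c : request.getCookies()) {
                if (name.equals(c.getName())) return c.getValue();
            }
        }
        return null;
    }

    public void init(FilterConfig config) {}
    public void destroy() {}
}

Because the secure cookie never travels over plain http, a sidejacker who grabs the normal session cookie on an open wifi network can still browse as the user, but gets bounced back to the login page the moment they try to enter a secure area.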