Tag Archives: development

SpiralKit

Introducing SpiralKit

A little more than a year ago, I released my first iPhone application, Spiral by Digital Generalists.  After several updates and many happy customers, I’ve decided to release the drawing engine for Spiral as an open source framework through my company, Digital Generalists, LLC.

What Is SpiralKit?

SpiralKit is a Quartz 2D-based drawing framework for iOS written in Objective-C. The framework enables ‘drawing’ on a UIView in an iOS application and ships with pen, highlighter, and eraser drawing effects.

The framework is built specifically to enable easily adding new effects, color spaces, and drawing algorithms in a modular fashion. If you want to provide a unique drawing effect, support of the CMYK color space, or a custom drawing algorithm, you can add such features to SpiralKit without having to modify the fundamental components of the framework.

Where to Find It

The project is hosted on GitHub at https://github.com/digital-generalists/spiralkit.

More Information

See the release post on the Digital Generalists site and the framework documentation for all information about using SpiralKit in your application.


Blueprints

Evaluating a Proposed Application Architecture

At work, I’ve recently had a few conversations about the best way to evaluate candidate architectures for a software project, so I decided to put down a few ideas I have on the topic.

Project Intake

Firstly, there are several questions that need to be answered and understood by all stakeholders of a project before you can properly evaluate an architecture:

  • What are you planning to build?
  • Why are you building it?
  • Who are you building it for (i.e. who is your audience)?
  • How will success be measured?

At a technical level, most projects start with an answer to the first question and have a weak or vague understanding of the other three. However, if you can’t answer those three questions, you don’t have a good justification for the answer to the first one.

The first task is to get agreement on the answers to Questions 2 through 4 and verify the answer to Question 1 still fits. If it doesn’t, use the opportunity to identify what can be built that fits the needs of the answers to Questions 2, 3, and 4.

Business Environment Analysis

Most technical ecosystems have processes and expected technical standards that apply to all applications within the ecosystem. Your organization does things a certain way.  Your team likely has expertise in a specific technology.  Your business also likely has requirements around protecting the environment and facilitating support of the application.  These factors should be identified:

  • How are applications expected to authenticate and authorize users?
  • Are there standards around data security?
  • Are there standards around data retention?
  • Are there standards around transport protocols?
  • Are there expectations around supported platforms?
  • Are there standards around accessibility?
  • Are there standards around logging?
  • Are there standards around analytics?
  • Are there expectations or standards around interoperability across applications?
  • Are there standards regarding UI conventions?
  • Will the new product provide technical challenges to existing distribution and delivery models?
  • Will the new product provide technical challenges to existing support mechanisms?
  • Are there standards around code documentation?
  • Will the new product provide technical challenges to existing documentation procedures?

The answers to these questions will identify many of the inherent technical requirements of the system. These aspects are related to the notion of certifying the application as “good and complete” as your organization defines that term, even if informally.  If there is no formal certification process, either within your company or externally, for the application, answering these questions will help define what “good and complete” means to your product.

Technical Environment Analysis

The next step is to identify what you already have in place and how the new product will fit into the overall ecosystem.  Few applications are 100% greenfield with zero dependency on existing systems, so it is paramount to understand what environment you will be working with.

  • What systems are already in place?
  • What role will those existing systems play in the new product?
  • How will the new system affect seemingly unrelated products?
  • What types of access privileges are required to interact with these systems?
  • What integration facilities already exist in the identified systems?
  • What data does the new application need and where can it pull that data from?
  • In what formats are the data accessible?
  • What second-level dependencies exist?

Technical Architecture Review

At this point, we should have enough information to conduct a proper architecture review. Firstly, verify that the proposed architecture addresses all of the concerns identified in the Business and Technical Environment analyses.

After those business-related concerns are addressed, most of the remaining aspects of the architecture will be technically focused.  However, technical concerns tend to be highly domain and context specific.  The meaning of “good and complete” when it comes to these technical concerns will be influenced by the programming language, platform, organization, and product. In general though, the following concerns tend to apply regardless of context and can be used as a base set of technical concerns to review regardless of project:

  • Separation of Concerns: Each component of the system should only do one thing and do it well. In practice, this minimally means ensuring a strong separation between the business logic implementation and the UI.
  • Abstract Interfaces: Are the touch points between the system integration points and product components done in a way that effectively hides implementation details of both sides of the integration?
  • Scalability: Can the proposed implementation easily scale to accommodate concurrent execution of the same logic processes across different data sets?
  • Maintainability: Is the architecture well reasoned, consistent, and easily modified?

When Should Architecture Reviews Occur?

You do not want an architectural review to impede the development process. Your goal is not to create the “perfect” architecture.  Your goal is to build a solid product.  However, you want to identify issues with a proposed architecture early enough to do something about any problems an architecture may have.

  • Ideally, a review of a proposed product architecture should occur before any coding has begun. (This is nearly impossible to achieve in my experience.)
  • A more realistic ideal is that a full architecture review should be conducted as soon as a potential product’s prototype or alpha implementation is minimally functional.
  • At a minimum, a full architecture review must be conducted as soon as a proposed product reaches the release candidate stage if one has not been completed earlier.
  • A full review should be conducted once prior to the product’s initial release. As soon as this occurs, summary reviews of the architecture will likely be sufficient at subsequent steps of development and maintenance.
  • For ongoing development, a summary architecture review should occur at the start of each new release and again as part of the project’s release criteria (ideally at the end of the last development sprint) or as soon as the first release candidate is identified.

A Reality of New Product Development

Most new products/applications start out as skunkworks projects. Typically an individual or a small group builds a product prototype and uses that prototype to gain approval/funding for the project.  As a consequence,  a new product often starts life with a sizable, un-vetted code base and significant technical debt. The process for bringing these products to production involves:

  • Conducting a full architecture review.
  • Documenting gaps in all of the aspects identified above.
  • Conducting a full code review.
  • Identifying all technical debt as issues/defects in your project tracking system.
  • Prioritizing architectural gaps and identifying gating issues.
  • Creating a plan to address all non-gating issues in a timely manner.
  • Verifying that all gating issues are addressed before the product is released to production.

The process I’ve laid out is intentionally light on specific procedural details because each organization is different and each technology stack and platform carry specific technical concerns.  The mechanics of a formal architecture review process need to be crafted based on the specific needs of your organization and the technology you are using.

However, this outline should provide a good abstract notion of what goes into an architecture review.  The items and concerns I’ve outlined probably aren’t comprehensive.  If you feel I’ve overlooked something important, please mention it in the comments.

Markers

Marker Interfaces

Marker interfaces tend to be one of those things that you don’t think about often.  But when you have a problem they can help you solve, you think they are awesome.  The trick is knowing when they should and should not be used.

What is a Marker Interface?

Marker interfaces are basically an interface contract with no methods or properties specified.  The notion of an interface contract is common in many languages but takes on many different names.  If you’re familiar with Java interfaces, Objective-C protocols, C++ pure abstract classes, or any similar technique in your language of choice, you know what I’m referring to.

Typically, an interface contract specifies a set of methods that must be implemented by any object that intends to support the contract.  This enables you to refer to an object by the contract (which typically corresponds to a functional role or feature role in the program) rather than the concrete type of the object itself.  This both clarifies how code consuming an object intends to use the object and makes it possible to easily use different concrete classes that also implement the contract without needing to change the consuming code.  The consumer doesn’t care about the concrete object type as long as it satisfies the contract.

So why on Earth would you ever want to create a contract that doesn’t specify any methods?  Most of the time, you don’t. But if you need a way to abstractly specify an object type, especially when the abstract type may have concrete instances that don’t share a meaningful inheritance chain, then marker interfaces are likely a perfect solution.  You most often have a need to specify an abstract object type like this when you have orchestration or coordination code that is transferring objects from one part of a program to another.  The orchestration code doesn’t need to use or modify the object, it just needs to move it from one place to another.

Such orchestration code tends to be both fairly abstract and highly leveraged within an application’s architecture, so you often don’t want to specify concrete types for the objects being sent back and forth.  A simple way of generically passing objects is to specify the parameters of methods in the orchestration code as generic objects.  But specifying transfer method parameters as a generic object has a few downsides.  Firstly, it doesn’t provide any guidance to the caller about what should be passed.  Secondly, literally anything can be passed.  If you accidentally pass an object you didn’t intend to, the compiler/parser won’t catch the error.

void transfer(Object data)

Marker interfaces help avoid these problems.

By using a marker interface, you can specify the intent of what should be provided to the method without restricting you to a specific type or inheritance chain.

void transfer(IFooDataObject data)

This method signature makes it much clearer to the person using the method what type (in the functional-intent sense of the word) of object should be passed to it.  Any concrete type can still be passed, but you have to make a conscious choice about which objects should be allowed by adding the marker interface to the type definition of the object before you can pass an object of that type. In a mature architecture, prior decisions about which objects carry this declaration can greatly minimize confusion about what to send where.

You also get a level of type checking via the compiler, so if you accidentally pass a reference to a UI control object instead of the data property on the control you intended to pass, the compiler can let you know.  Without the marker, it wouldn’t.
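Put together, a minimal Java sketch of the pattern might look like this. The IFooDataObject name mirrors the hypothetical signature above; the other class names are my own, invented for illustration:

```java
// Marker interface: an empty contract used purely as a compile-time type tag.
interface IFooDataObject {}

// Any class, regardless of its inheritance chain, can opt in
// by declaring the marker.
class CustomerRecord implements IFooDataObject {
    final String name;
    CustomerRecord(String name) { this.name = name; }
}

class Orchestrator {
    // Only objects explicitly tagged with the marker can be passed;
    // handing in an untagged type is a compile-time error.
    static IFooDataObject transfer(IFooDataObject data) {
        return data; // hand the object off unchanged
    }
}
```

Passing a `String` or a UI control to `transfer` here fails at compile time, which is exactly the safety net the plain `Object` signature lacks.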

This technique of using marker interfaces for high-level type specification is really the one application of marker interfaces I use frequently.  I tend not to like using marker interfaces to direct flow of control within an application (i.e. “if foo implements bar then do x”) because I think such control flow is typically better governed by object state than by type. But that thought is probably worth a post of its own.

Clearing a UIWebView’s Browser Cache

While writing a hybrid native/web application for iOS, I encountered a problem I never even bothered to consider before starting:

How do I clear the browser cache for the application?

Clearing cached HTML resources, scripts, and stylesheets is the type of browser-provided functionality that is easy to take for granted. However, when writing an application with a hosted UIWebView, you’re suddenly solely responsible for handling little issues like this.

Thankfully, programmatically clearing the browser cache for an iOS application is simple:

[[NSURLCache sharedURLCache] removeAllCachedResponses];

If you’re also interested in clearing out cookies, then also include this:

NSHTTPCookieStorage *storage = [NSHTTPCookieStorage sharedHTTPCookieStorage];
for (NSHTTPCookie *cookie in [storage cookies])
{
    [storage deleteCookie:cookie];
}

When and where to invoke this functionality is really up to the particular circumstances of your application.

While clearing the browser cache is the right approach for certain problems, it’s not the answer to every cache-related problem.  If you’re interested in bypassing the caching of web resources altogether, be sure to look into NSURLRequest’s NSURLRequestCachePolicy rather than aggressively clearing the application’s cache.

Why Won’t Internet Explorer Change the Appearance of an HTML Element I Changed via JavaScript

A very common technique in modern web applications is to dynamically change the appearance and/or behavior of an HTML page element via JavaScript. Often, this is done by modifying the style property of the object to apply different CSS rules to the element or by changing the layout and dimension properties inline.

The Problem

While building an application like this, you’ll likely quickly discover that Internet Explorer (at least the ~version 8 variety) doesn’t apply some of these changes when you expect, and often need, it to. The change will either appear to be applied at some random point significantly after the change to the style is made or appear to never be applied at all.

The Apparent Cause

What’s actually happening appears to be a performance optimization in these older versions of IE. It appears that the browser makes the explicit choice to defer some style changes for as long as possible. From what I can tell, the browser assumes that the layout of the page should stay largely static until something that would significantly impact the layout happens such as resizing the window.

At least that’s what appears to be happening. I don’t know specifically what decisions were or weren’t made by the development team. But I find that viewing the problem in the way I describe above helpful in understanding what is happening and how to work around the behavior.

How to Cope with the Issue

A solution to this issue is to “nudge” IE into applying the styling changes when you want. The way to do this is to reference a property in the element’s style property that IE believes it requires styling changes be applied to properly calculate. Referencing the offsetHeight property seems to work rather well in most cases.

var nudgeByReading = elem.offsetHeight;

Simply reading the property may be enough to trigger a re-layout.  However, it often isn’t.  Frequently, you’ll first have to change the property and then reference it to force the re-layout.

To force a redraw regardless of style changes, you can write a method that makes an innocuous change to the style, read the property, and then reverse the “fake” style change, like so:

var savedDisplayStyle = elem.style.display || '';

elem.style.display = 'none';
var nudgeByReading = elem.offsetHeight;

elem.style.display = savedDisplayStyle;
nudgeByReading = elem.offsetHeight;

To be safe, you may want to consider universally reading and writing to the property to force the recalculation just in case simply reading from the property doesn’t force the redraw.
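Wrapped as a reusable helper, the sequence above might look like the following sketch (the function name is my own; elem is any DOM element):

```javascript
// Nudge IE into applying pending style changes by toggling an
// innocuous style property and reading offsetHeight, which forces
// the browser to run a layout calculation.
function forceRedraw(elem) {
  var savedDisplayStyle = elem.style.display || '';

  elem.style.display = 'none';
  var nudgeByReading = elem.offsetHeight; // the read forces a layout pass

  elem.style.display = savedDisplayStyle;
  nudgeByReading = elem.offsetHeight;     // read again after restoring

  return nudgeByReading;
}
```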

Some Notes on Applying the Technique

One thing to remember when using this technique is that changing the layout of a parent element will force a layout calculation of each of the parent’s child elements. So, if changes are made to multiple elements that have a common parent, forcing a redraw of the parent will cause a redraw of each child.

Doing this may result in cleaner, more resilient code. But do keep in mind that forcing a redraw at a point too high in the DOM hierarchy (such as the “body” tag for example), will cause most or all of the page to redraw which may cause performance problems. As a rule, it’s a good idea to redraw as few elements on the page as possible.

How to Convert a String Representing a Unicode Character Sequence to the Unicode Character

I recently received some translated resource files from the Translations team at work.  To my surprise, all of the files, even those for double-byte languages, were returned in ASCII encoded files.  After some inquiry, I found out that because of the technical limitations of a proven legacy system, all translation files were encoded as ASCII.  What this meant is that I was confronted with a set of ASCII text files containing Unicode escape sequences (\uxxxx) that I was responsible for converting to a proper Unicode encoding.

While solving the problem, I came across a couple solutions for converting Unicode escape sequences to a different encoding.  The first was to use the StringEscapeUtils class in Apache Commons Lang.

String lineOfUnicodeText = StringEscapeUtils.unescapeJava(lineOfASCIIText);

Using the StringEscapeUtils class is very straightforward; simply read the contents of the ASCII file line-by-line, feed the line of data into the unescapeJava method, and write the unescaped text to a properly-encoded new file.  But this technique requires writing a utility program to feed the contents of the ASCII files into the StringEscapeUtils methods and then write the transformed string to a new file.  Not hard to do, but much more work than ideal.

The second solution is to use the native2ascii utility included with the Java JDK.  The utility can take the input file and perform effectively the same unescape transformation that Apache Commons does.

native2ascii -reverse -encoding utf8 c:\source.txt c:\output.txt

A very simple solution that works as advertised.  No quirks or caveats that I’ve noticed.  There’s even an Ant task for incorporating native2ascii into build scripts.
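If neither Commons Lang nor the JDK utility is available, the core transformation is small enough to hand-roll. The sketch below (class and method names are my own) handles only the basic \uXXXX form, not octal escapes or other sequences unescapeJava supports:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class UnicodeUnescaper {
    // Matches a literal backslash, 'u', and exactly four hex digits.
    private static final Pattern ESCAPE = Pattern.compile("\\\\u([0-9a-fA-F]{4})");

    // Replace each \uXXXX escape in the line with the character it encodes.
    static String unescape(String line) {
        Matcher m = ESCAPE.matcher(line);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            char c = (char) Integer.parseInt(m.group(1), 16);
            m.appendReplacement(out, Matcher.quoteReplacement(String.valueOf(c)));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

As with the Commons approach, you would read the ASCII file line-by-line, run each line through this method, and write the result out with a proper Unicode encoding.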

Setting a Custom User Agent in Objective-C

Sadly, it is a common scenario in web development to have code that handles specific browsers, or classes of browsers, differently.  The practice largely has its roots in the “bad-old-days” of having to handle the many quirks and idiosyncrasies in the different CSS and layout engines of the major browser vendors.  Thankfully, this problem is largely improving (at least in my experience), and the need to write browser-specific code of this type is becoming less of an issue.

But handling layout differences across browsers isn’t the only reason to treat different clients uniquely.  Dropbox’s matching of the sort order paradigms of either Windows or Mac depending on which OS the site is being viewed on is a practical example of when functional differentiation is desired for different clients.

Dropbox on Mac

Dropbox on Windows

These techniques are nearly always implemented by inspecting the user agent string provided in the headers of each request the server receives.  Each browser provides a user agent header that describes the type, version, OS, and other relevant details about the client sending the request.  For a web application, the client application is the web browser, so you don’t need to worry about specifying one; the browser provides it. You only need to worry about consuming it if necessary.

But what about hybrid native applications where mobile content is running within a web browser control (such as UIWebView in iOS) within a native application?  By default, the web browser control typically sends a subset of the user agent string the full-browser version would send.  But what if you want your server code to recognize when your native application is submitting requests?  How can you specify details about the native application in the user agent string if you need the backend application to behave differently for these hybrid clients?

In the iOS SDK, the answer is to modify the UIWebView’s user agent string for your application.  This is easily done by:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    NSString *existingUserAgent = [[[UIWebView alloc] init] stringByEvaluatingJavaScriptFromString:@"navigator.userAgent"];

    NSString *newUserAgent = [NSString stringWithFormat:@"%@ custom-information-about-my-application", existingUserAgent];
    NSDictionary *userDefaults = [[NSDictionary alloc] initWithObjectsAndKeys:newUserAgent, @"UserAgent", nil];
    [[NSUserDefaults standardUserDefaults] registerDefaults:userDefaults];

    /* ... your application code ... */

    return YES;
}

The essence of the above code is that the existing UIWebView user agent is retrieved from a UIWebView instance (it doesn’t need to be the instance that will actually display the content), the custom user agent information is appended to the original user agent, and then the new user agent is registered to the user defaults of the application under the key UserAgent.  This will set the user agent sent for all network requests sent from your application’s code, including raw NSURLConnection requests.*

In my experience, you should always augment the existing user agent rather than completely replace it as the server will likely make some assumptions about whether you are a supported client and what flavor of display code to deliver based on the web view control’s existing agent string.  This is especially true in large applications where you don’t own every aspect of the server code.

* NOTE:  Things can get a little weird for requests issued from a linked library (like making an API call such as stringWithContentsOfFile).  Therefore, it is dangerous to assume that EVERY server connection issued by your application will carry the custom user agent. Your mileage may vary, so verify that the custom agent is being applied uniformly for all calls and adjust how the agent is registered if needed.

Fun with Simulators and VMs – Can’t Delete an Application from the iOS Simulator

For the most part, Apple’s simulator environment for iOS (both the iPad and iPhone), does a pretty good job.  There are mountains of odd “simulator-only” bugs and some obvious features are simply not implemented (three-finger+ gesture emulation anyone?).  However, as anyone who has worked with device emulators other than iOS’s can likely attest, Apple’s offering more than holds its own in terms of features and usability.

That said, the quirks and bugs can drive you mad.  The latest one I’ve noticed is that I am no longer able to uninstall an application from the simulator using the iOS “uninstall” feature (i.e. tap and hold the application’s icon until it starts to shake, then press the ‘X’ in the corner to remove the application).

This isn’t a big thing as re-running from the simulator will reliably (in my experience) replace the existing binary on the simulator.  However, there are times when I want to completely remove the application to remove things like the Settings bundle or resource and data files stored in the bundle directory of the application on the simulator.  The easiest way to do this had been to uninstall the application via iOS itself.

However, in the 6.0 or 6.1 SDK (I’m not sure when I first noticed, and I’m not motivated enough to track down the specific release the bug was introduced), the “uninstall” feature stopped working.  Tap and hold will still cause the icons to shake and the ‘X’ to appear, but pressing the ‘X’ causes the simulator to lock up.

The work around to this is simple.  You can just delete the application bundle directory from the simulator’s install directory on your hard drive.

Open the terminal and cd to:

cd ~/"Library/Application Support/iPhone Simulator/6.1/Applications/"

Change the “6.1” in the above path to the version number of the simulator where the application you want to remove is installed.  If you issue an ls command in this directory, you’ll see a list of directories with GUIDs for names.  Each directory represents an application deployed to the simulator.  Find the directory for the application you want to remove and delete it:

rm -r <GUID>

Nothing complicated, and for people comfortable with the terminal at least, an easy method for clearing a deployed application from the simulator.

Resetting the Undo/Redo Stack after Applying an Undo/Redo Action in iOS

If you are developing iOS applications and are not using NSUndoManager, you should be.  The class provides a very solid implementation for enabling undo/redo functionality within an iOS application.

However, when I first wired the class into a project, I quickly noticed that changing a value after navigating backward into the undo stack did not clear the now-downstream, previously-applied undo stack operations.  Which is to say, if I undid an edit and then changed the value of the “undone” field, I could redo to the value prior to the undo.  Which, much like that last sentence, is very confusing and hard to follow.

Most undo/redo implementations assume that applying a change after navigating backwards in the undo stack should invalidate the forward operations within the stack.  So while I was surprised that wasn’t happening right out of the box with NSUndoManager, I assumed it was capable of the behavior (i.e. I must be doing something wrong).

And I was correct.  Straight out of the box, NSUndoManager assumes that all undo operations are non-discardable, which is to say, it will preserve them even when the state of the class hints that the operation may no longer be valid.  To enable the functionality of having the traversed undo operations removed from the redo stack once a modification is made to the “undone” data value, I simply had to indicate that the applied undo operations should be treated as “discardable”.

A very straightforward, and possibly overly-aggressive, way of doing this would be to subclass the NSUndoManager and override prepareWithInvocationTarget as so:

- (id)prepareWithInvocationTarget:(id)target
{
    // This ensures that the redo stack is reset if the user
    // edits a field after moving back in the undo stack.
    [self setActionIsDiscardable:YES];
    return [super prepareWithInvocationTarget:target];
}

As is often the case with software development, your project likely has some characteristics that are very specific to your application, so the above code likely is not a perfect fit for your problem. However, if you’re looking for a way to reset the redo stack of NSUndoManager, the above code should provide some guidance on how to develop a solution that fits your needs.

Extension Cords

The Power and the Pain of Plugin Development, Part 3

Previously, we looked at why a client-side plugin architecture might be a good idea and why it might not be.  Now, let’s formalize our recommendations.

Some Final Thoughts on When You Should and Shouldn’t

Now that we’ve discussed some of the benefits and pitfalls of taking on plugin development, hopefully you have a better feel for when leveraging a plugin is appropriate for your project. However, there is an additional aspect that I factor into my evaluation:

  • perceived appropriateness for the platform

Basically, this factor is meant to gauge both how frustrated your users will find the idea of a plugin and how friendly the general developer community is to the idea of plugins for the platform category to which your host application is perceived to belong.

This is how I break down platform categories as they apply to this particular question:

  • desktop software built on the idea of plugins
  • desktop software that has a plugin API, but isn’t plugin-oriented at its core
  • web browsers

Desktop and Based on Plugin as the Core

Software of this variety tends to be complicated, valuable, and used for sophisticated work. The most well known examples are Adobe Creative Suite and Eclipse. Less prominent members of this class are smaller, albeit popular, applications such as Emacs and command shells on various platforms. The products are built in such a way that core product development is often done as plugins. In these ecosystems, plugins are not just encouraged, they are expected.

For disciplines such as media design and software programming, these nuanced products tend to have highly technical users asking for a solution to very specific problems. In environments such as this, plugins tend to flourish. If this is where your project is targeting, you should have few concerns about the appropriateness of using a plugin as the delivery channel.

Desktop with an API but without Primary Orientation

This is, more often than not, the Microsoft Office case. While there are other products that fit this category, Office is the prominent player. Typically, there is a very polished extension API and even a large and committed community around extending the products. But unlike the more technically-oriented applications mentioned above, the technical fluency of this class of software’s users can vary widely.

Plugins can be successful and popular here. The appeal of having software do new and cool things is a universal draw. You don’t have to be a geek to love a new feature.

However, because the broader audience tends to be less tech-savvy, UI, delivery, and support decisions typically need to be more conservative. In general, you want to make sure your design and marketing message are closely aligned with the attitudes of the people likely to install the plugin.

Because the users of these applications tend to be less technically-oriented, the place of plugins in the ecosystem tends to take more of a secondary role. Your plugin would need to become phenomenally popular to give it comparable clout with the internal Strategy and Architecture teams at the host application. On these platforms, you must be prepared, and frankly should expect, to rewrite the plugin often and abruptly.

Web Browsers

Early in my career, I tended to think of web browsers in the same class as the “Desktop with an API but Nothing More” camp. The browser was an extremely widely-used desktop application with a good extension API. However, over the course of time, I’ve significantly revised that assessment. Browsers are in a class of their own.

Because browsers tend to lag only operating systems in terms of usage, browser plugins come up very often on product roadmap wish lists. However, the size of the audience is almost always more of a curse than a blessing.

One of the most resonant messages uttered by advocates of the “pure web” movement has been that things on the web should be free and that they should be universally available. I have been shocked often at how articulately non-technical users can attack any bit of software that doesn’t fit the free and universally available model of the pure-browser-delivered web. Even if your plugin is free, it will run afoul of the “universally available” purists if it doesn’t seamlessly run on every platform without any user interaction.  Just ask Adobe.

Personally, I think this frustration often boils down to the fact that most people don’t care about technology, only the things technology can do for them. So any point of view that affirms those two notions is innately appealing.

Because of that, most of that uber huge audience of browser users tends to be:

  • annoyed that they should have to install anything for any !@@#-ing reason
  • skeptical that anything they are asked to download and install is actually free or safe

Installing software can be a pain, especially on a system like Windows that often insists the user close an application or reboot the machine to complete an install. Users expect access to a site’s information or functionality to be available as soon as the page loads.  Anything that gets in the way of that is an annoyance, plugins especially.

In general, I’ve found web browsers are an unforgiving platform for plugin development. Expectations for cross-browser support and the ability to override or circumvent security mechanisms tend to run pretty high. Because the zero-friction, potentially cross-platform delivery channel of HTML is available literally right in the same application, the question of “why aren’t we doing this as a web application?” essentially never goes away with web browser plugins. Because of this, in my opinion, browser plugins should be avoided.

Conclusion

That’s my take on plugin development.  Hopefully the information will help you in determining whether a plugin is the right choice for a software project.  Like I said above, this series is based on my experiences in building plugins and not intended as gospel truth.  Like all things in software, your mileage may vary.