Thursday, March 26, 2015

New Ephor version 1.2 with lots of good stuff

Today we have released version 1.2 of Ephor. It contains a few quite exciting new features.

CSV Import


The first feature of note is the ability to import your assets from an external source using CSV import. This was the most asked-for feature in Ephor. All you need to do is create a table of assets in your favourite spreadsheet, export the data to CSV format, and import the CSV file into Ephor. This way, you can quickly migrate to Ephor from your legacy method of storing asset information.
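For illustration, an imported file might look something like this (the column names below are just an example, not a required Ephor format - use whatever attributes describe your assets):

Name,Type,Serial Number,Owner,Location
MacBook Pro 15,Computer,C02XXXXXXXX,jsmith,Gdansk office
LaserJet 400,Printer,PHBXXXXXXX,office,Gdansk office
Dell U2412M,Monitor,CN-XXXXXXX,akowalski,Gdansk office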

Below is a screenshot of the feature:


PC Collector Improvements


The "PC Collector" functionality, which uses Java-based computer scanner to collect information about user's machines and send them to Ephor received a bit of a facelift. We have changed the way the scanner is launched - before, it used "Java web start" technology to launch an external Java application. This caused problems, especially on OS X, where all applications launched from the web need to be signed in a way that made an optimal user experience quite impossible. So instead of an external application we are now using a Java applet embedded in Ephor web page. 

You can see it in action in the screenshot below:


Restricted Read-only Users


We have added a new user role to Ephor Standalone: "Restricted User". Restricted users are able to browse assets, but only in read-only mode, which means that they cannot modify the contents of assets. However, they can still create JIRA issues and link them to assets - that way, they can report problems related to their equipment.

In Ephor for JIRA (and Service Desk), we have mapped the "Restricted User" role to a configurable set of JIRA groups. This gives the Ephor for JIRA administrator the ability to natively control who can edit assets.

Related to that, all Service Desk "customer" users are given the "Restricted User" role.

In addition, we have provided the ability to restrict who can modify the "Asset" custom field in JIRA.

Primary Assets


We have added the ability for every user to nominate one of their assets as their "primary" one. For example, you can make your main work machine "primary":


Primary assets can be quickly accessed from Ephor search page:


and also, in Ephor for JIRA, they can be quickly selected when filling in the "Asset" custom field:




Improved JQL


In Ephor for JIRA, searching for issues (using JIRA's JQL language) associated with an asset was cumbersome - you had to know your asset's numerical ID. Now the JQL support has been vastly improved. In addition to numerical IDs, you can search for issues using the full power of Ephor's native search queries: you can specify things like your asset name, MAC address, IP address, owner and pretty much anything else.

See below for an example query for JIRA issues associated with assets of type "Computer":


See this page for a full description of this functionality.

Support for JIRA Graphs


We have now made Ephor for JIRA's "Asset" custom field "stattable" - which means that it can be used as data for JIRA graphs. This gives you the ability to plot graphs of the number of issues per asset.

See the screenshots below for a preview of some of these graphs:



We are going to elaborate on the statistics support in Ephor for JIRA in subsequent releases, providing you with the ability to plot more advanced graphs related to Ephor asset types and other criteria. Watch this issue for more details.

TFS4JIRA Synchronization Filters

TFS4JIRA, our second best-selling plugin, has just been released with a new, highly requested feature: Synchronization Filters. We got feedback from many of our clients that they wanted a way to limit which issues and work items are synchronized, and now they can do that. A typical use case for the Filters feature is a situation where certain issues are marked with a given label - only issues with that label will be synchronized. Detailed information about how to use Filters can be found in the TFS4JIRA documentation.





Along with the new Filters feature we have also introduced a new design for the TFS screens. We named it with a super-fancy abbreviation: "TUI" (TFS4JIRA User Interface). The Profile Wizard and Synchronization Filters are the first screens created with this new layout, but eventually all screens will use it.


Thursday, March 5, 2015

Debugging concurrency issues: a JIRA plugin case

The goal of this blog post is to share my experiences with analyzing and debugging concurrency issues that occurred after integrating a JIRA plugin with the Atmosphere library. Perhaps a few of these tricks will be useful to the casual developer; perhaps some of them could be avoided or replaced with better techniques. Anyway, the audience of this blog post is developers, so expect gory technical details.

Background.


I'm part of the team that is developing a new plugin for JIRA Agile from scratch. The plugin's main functionality is to allow people to place estimations on issues - in real time! So whenever you're participating in a planning session, you can instantly see in your JIRA what the estimations from your colleagues are - all without the need to bring those passé poker cards to the session, or to use an external application for managing the voting. Anyway, one of the non-trivial development tasks was to implement the 'real-time' communication between different web browsers - for example, if user A places a vote, we would like to transmit that information to user B and show them the vote (without refreshing the page). Technically we could have implemented all of it (frontend, backend) ourselves, but being smart, life-experienced engineers we reached for an existing open-source solution: the Atmosphere library. After all, it should save us a lot of time: no wheel reinventing, no corner-case analysis - somebody has already spent their time on that.


The Story.


Unfortunately for us, after managing to integrate the aforementioned library into our JIRA plugin, we started to experience very UNSTABLE webdriver test runs on our build server. Yes, I know, webdriver tests are destined to be unstable, but nearly all of the tests were failing or erroring out in an unpredictable pattern. That's not acceptable! So after a bit of astonishment (our plugin seemed to be OK when tested manually) we had to admit that something sinister was happening. In the end it turned out to be a concurrency issue: under non-deterministic conditions, two threads were writing to the same object when they shouldn't have been. Quite a trivial issue, but due to the complexity of the involved components (Tomcat, Atmosphere, JIRA, our plugin) it took me a few weeks to nail down the problem. And so I decided to share a few of my experiences with you.


Initial analysis.


OK, so as I said before, the initial symptom was that our webdriver tests were failing. After inspecting the build logs and the screenshots from the failed tests, these facts became obvious:

1) the browser usually failed to load the page completely - in the screenshots we could see white (empty) space, without any HTML loaded.
2) sometimes the browser failed with the following screenshot:
Yuck, not very informative.


3) the build logs revealed that something was going wrong with the HTTP communication, as the following exceptions were observed in the logs:
12-sty-2015 19:13:40 [INFO] http-bio-2990-exec-1 ERROR admin 1153x85x1 8a9ocx 127.0.0.1 /secure/RapidBoard.jspa
    [com.atlassian.velocity.DefaultVelocityManager] Exception getting message body from Velocity:
12-sty-2015 19:13:40 [INFO] java.lang.IllegalStateException: getOutputStream() has already been called for this response
12-sty-2015 19:13:40 [INFO]  at org.apache.catalina.connector.Response.getWriter(Response.java:639) 

or these ones:
12-sty-2015 14:16:48 [INFO] SEVERE: Servlet.service() for servlet [default] in context with path [/jira] threw exception
12-sty-2015 14:16:48 [INFO] java.lang.IllegalStateException: Cannot forward after response has been committed
12-sty-2015 14:16:48 [INFO]   at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:349)
I'm sparing you the details of the full exception stack trace - it pointed out that the exception came from somewhere in the org.apache.catalina.* internals. Not good.

OK, so the initial observation was that something was corrupting Tomcat (org.apache.catalina.*) Response objects - but who and why?

Let's debug.

To make things worse, the problem appeared to be very elusive - it manifested itself on nearly every build run (where all the tests are executed), but locally you wouldn't have such luck - there were no steps to reproduce the issue in a deterministic way. So, what were the options? I concluded that it might be a good idea to attach the IDEA debugger to a running application and run the tests in a loop. I hoped that on some run the exception would occur and I would be able to inspect things a bit with the debugger. OK, so the first task was to put breakpoints at the right code lines. The logs from the builds contained the relevant code locations, so it should be as simple as hitting CTRL+N, entering, for example, "org.apache.catalina.connector.Response" and going to line 639 (see the first log I've attached above). So simple.

Unfortunately, the Tomcat code (org.apache.catalina.*) is not used explicitly by our plugin - of course, the plugin eventually runs in the JIRA environment, which in turn uses Tomcat as a servlet container, but the IDEA debugger has no idea about it! So we need to add a proper dependency to our project's pom.xml to make things obvious to IDEA: first we figure out which version of Tomcat is used:


...then we can add a special Maven profile to our pom.xml and enable it on our devbox:
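Here is a minimal sketch of such a profile (the profile id is arbitrary, the Tomcat version has to match whatever your JIRA instance actually bundles - 7.0.40 in our case - and the "provided" scope keeps the jars out of the plugin artifact):

<profiles>
    <profile>
        <id>tomcat-sources</id>
        <dependencies>
            <!-- Only here so that the IDE can resolve and attach the Tomcat sources
                 for debugging; JIRA already provides these classes at runtime. -->
            <dependency>
                <groupId>org.apache.tomcat</groupId>
                <artifactId>tomcat-catalina</artifactId>
                <version>7.0.40</version>
                <scope>provided</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.tomcat</groupId>
                <artifactId>tomcat-coyote</artifactId>
                <version>7.0.40</version>
                <scope>provided</scope>
            </dependency>
        </dependencies>
    </profile>
</profiles>

The profile can then be enabled only locally (for example in IDEA's Maven panel, or with mvn -P tomcat-sources).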
Quite a neat trick - you can even commit that to a shared repository without fearing that you will disturb your co-developers.


Breakpoint catching.


With the above directives, IDEA is now able to see the Tomcat sources. We can place breakpoints at the exception-triggering places and start looping the webdriver tests. Whew. A few coffees later a breakpoint triggers: the JVM threads are frozen and we can inspect the situation just prior to throwing the IllegalStateException. As one might expect, there is nothing interesting: after inspecting the Response object (I'll spare you the details) we can clearly see that it is 'committed' - whatever that means - but there is no trace of who did that. Let's check what Response.setCommitted() looks like (this is the only place where the 'commited' flag is changed):

org.apache.coyote.Response.java:
public void setCommitted(boolean v) {
    this.commited = v;
}


A simple setter. If only we could see the stack trace of whoever executed this setter last. Fortunately, IDEA breakpoints have some nice functionality - let's use it! We right-click on the breakpoint:





...and there we can:

1) turn off thread suspension - because we don't want the application to stop on every "setCommitted" operation; there are thousands of correct ones while usually only one is wrong, and we don't know upfront which one.
2) log an evaluated expression - here we can execute a one-line Java expression, so we can start logging something like:
Arrays.toString(new Exception().getStackTrace())
or we can even use existing object structures and methods to keep the IDEA console clean:
setHeader("tony_halik", Arrays.toString(new Exception().getStackTrace()))
Later, when the IllegalStateException occurs again, we can retrieve the stack trace of the offending thread via the debugger:


...and reformat it in whatever text editor:



This allows us to see what committed the Response object - it is actually code from the Atmosphere library (org.atmosphere.cpr.*), which we call while processing HTTP requests in our plugin (com.spartez.pokerng.atmosphere.SpartezAtmoshpereFilter). However, analyzing the relevant code of those classes leads to no conclusion. We need more data.

Tomcat threads.


At this point I didn't clearly understand which threads were involved. I knew which thread was throwing the IllegalStateException, because I could see it clearly in the debugger:


But was it the same thread that had set the 'committed' flag some time before? I wasn't sure. To better understand what was going on, I decided to use the old-fashioned System.out.println() technique and collect information about the relevant threads and their behavior. I rebuilt Tomcat from scratch with a few additional System.out.println(Thread.currentThread().getName() + " " + System.identityHashCode(someObject)) statements. Thread.currentThread().getName() is enough to answer the "which thread" question, while System.identityHashCode(obj) is enough to answer the "which object" question. Not going into the details of how I chose where to put those statements, let's just say that after some trial and error I had quite decent activity logs, like the example below:
(...)
[INFO] http-bio-2990-exec-6 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@21839092 sendHeaders()
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@21839092 setCommitted()
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@21839092 recycle() null
[INFO] http-bio-2990-exec-6 zaraz oddam responsa do puli, patrz nastepny recycle()
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@21839092 recycle() null
[INFO] http-bio-2990-exec-6 SocketProcessor.run() - skonczyl sie
[INFO] http-bio-2990-exec-6 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@1413693b sendHeaders()
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@1413693b setCommitted()
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@1413693b recycle() null
[INFO] http-bio-2990-exec-6 zaraz oddam responsa do puli, patrz nastepny recycle()
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@1413693b recycle() null
[INFO] http-bio-2990-exec-6 SocketProcessor.run() - skonczyl sie
[INFO] http-bio-2990-exec-6 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@44dfe44e sendHeaders()
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@44dfe44e setCommitted()
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@44dfe44e recycle() null
[INFO] http-bio-2990-exec-2 zaraz oddam responsa do puli, patrz nastepny recycle()
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@44dfe44e recycle() null
[INFO] http-bio-2990-exec-2 SocketProcessor.run() - skonczyl sie
(...)
This is the normal behaviour of the Tomcat threads that handle HTTP requests from web browsers - usually the communication goes like this:
1) a new request comes in from the web browser (via TCP) and Tomcat should handle it
2) a thread wakes up from the pool - in the example above these threads are named "http-bio-2990-exec-X", and I've logged each wake-up with the "obudzil sie" ("woke up") message written to System.out
3) the thread handles the request - it passes it to the underlying servlet (the JIRA side) or whatever, and it uses Tomcat's org.apache.coyote.Response to deliver HTTP content back to the web browser
3.1) in the example above you can see that the http-bio-* threads are sending the headers (sendHeaders()), which results in setting the committed flag. I wasn't logging other activity as I wasn't interested in it
4) after finishing processing the request, the relevant resources (in our example: the Response object) are recycled and the thread goes back to the pool (the "skonczyl sie" - "finished" - message)
...rinse and repeat as long as there are HTTP requests to serve.
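As a side note, tracing lines of the shape shown above can be produced with a tiny helper like the one below (the helper itself and its exact placement inside the Tomcat sources are my own illustration, not the precise code I used):

public final class TraceUtil {

    private TraceUtil() { }

    // Prints: [INFO] <thread-name> <class-name>@<identity-hash-in-hex> <event>
    public static void trace(Object subject, String event) {
        System.out.println("[INFO] " + Thread.currentThread().getName() + " "
                + subject.getClass().getName() + "@"
                + Integer.toHexString(System.identityHashCode(subject)) + " " + event);
    }

    public static void main(String[] args) {
        // Produces a line of the same shape as the logs above, e.g.
        // [INFO] main java.lang.Object@<hex-hash> setCommitted()
        trace(new Object(), "setCommitted()");
    }
}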

Tomcat recycling.

The example above shows what the typical activity looks like. However, I noticed that our IllegalStateException was usually preceded by (more or less) the following activity:
[INFO] http-bio-2990-exec-10 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-6 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-10 org.apache.coyote.Response@52e93037 sendHeaders()
[INFO] http-bio-2990-exec-10 org.apache.coyote.Response@52e93037 setCommitted()
[INFO] http-bio-2990-exec-10 WARN anonymous 805x1914x2 1p39yc7 127.0.0.1 /plugins/servlet/jpp-ng/atmosphere/board/1
[INFO]   [org.atmosphere.cpr.DefaultBroadcaster] Duplicate resource bee1c112-4f54-46dd-ae60-7a9360a0aea3. Could be
[INFO]   caused by a dead connection not detected by your server. Replacing the old one with the fresh one
[INFO] http-bio-2990-exec-3 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-3 org.apache.coyote.Response@59909aec recycle() null
[INFO] http-bio-2990-exec-3 zaraz oddam responsa do puli, patrz nastepny recycle()
[INFO] http-bio-2990-exec-3 org.apache.coyote.Response@59909aec recycle() null
[INFO] http-bio-2990-exec-10 org.apache.coyote.Response@59909aec sendHeaders()
[INFO] http-bio-2990-exec-10 org.apache.coyote.Response@59909aec setCommitted()
[INFO] http-bio-2990-exec-3 SocketProcessor.run() - skonczyl sie
[INFO] http-bio-2990-exec-3 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-3 zaraz oddam responsa do puli, patrz nastepny recycle()
[INFO] http-bio-2990-exec-3 org.apache.coyote.Response@6d6935c4 recycle() null
[INFO] http-bio-2990-exec-3 SocketProcessor.run() - skonczyl sie
[INFO] http-bio-2990-exec-10 SocketProcessor.run() - skonczyl sie
[INFO] http-bio-2990-exec-6 org.apache.coyote.Response@21839092 setCommitted()
[INFO] http-bio-2990-exec-9 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-9 org.apache.coyote.Response@52e93037 recycle() null
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@52e93037 sendHeaders()
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@52e93037 setCommitted()
[INFO] http-bio-2990-exec-9 zaraz oddam responsa do puli, patrz nastepny recycle()
[INFO] http-bio-2990-exec-9 org.apache.coyote.Response@52e93037 recycle() null
[INFO] http-bio-2990-exec-9 SocketProcessor.run() - skonczyl sie
[INFO] http-bio-2990-exec-9 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@1413693b sendHeaders()
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@1413693b setCommitted()
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@1413693b recycle() null
[INFO] http-bio-2990-exec-2 zaraz oddam responsa do puli, patrz nastepny recycle()
[INFO] http-bio-2990-exec-2 org.apache.coyote.Response@1413693b recycle() null
[INFO] http-bio-2990-exec-2 SocketProcessor.run() - skonczyl sie
The interesting part is actually this:
 
[INFO] http-bio-2990-exec-10 SocketProcessor.run() - obudzil sie
(...)
[INFO] http-bio-2990-exec-3 SocketProcessor.run() - obudzil sie
[INFO] http-bio-2990-exec-3 org.apache.coyote.Response@59909aec recycle() null
[INFO] http-bio-2990-exec-3 zaraz oddam responsa do puli, patrz nastepny recycle()
[INFO] http-bio-2990-exec-3 org.apache.coyote.Response@59909aec recycle() null
[INFO] http-bio-2990-exec-10 org.apache.coyote.Response@59909aec sendHeaders()
[INFO] http-bio-2990-exec-10 org.apache.coyote.Response@59909aec setCommitted()
There are two threads involved; one of them seems to wake up out of order, recycle Response@59909aec, and put it back into the pool (the "zaraz oddam responsa do puli" - "I'm about to return the response to the pool" - message). The other thread, however, is still messing with the Response object after the recycle. OK, so now let's take a look at the recycling mechanism in Tomcat:
// org.apache.coyote.Response code from tomcat 7.0.40. Lines 511-527:
public void recycle() {
     
    contentType = null;
    contentLanguage = null;
    locale = DEFAULT_LOCALE;
    characterEncoding = Constants.DEFAULT_CHARACTER_ENCODING;
    charsetSet = false;
    contentLength = -1;
    status = 200;
    message = null;
    commited = false;
    errorException = null;
    headers.clear();
    // update counters
    contentWritten=0;
}
This is quite interesting. Tomcat reuses the various objects needed for communication with the web browser. In this blog post we will focus on the Response object, but Tomcat uses the same recycling pattern for other HTTP-handling objects too. Back to the example above: you can see that the recycle() method resets most of the fields of the Response object to default values. So whenever a Tomcat thread finishes handling a request, it calls that method and puts the Response object into the pool of ready-to-be-reused Responses. However, there might still be an object in memory that holds a reference to that Response object and will manipulate it! And it seems that this was the cause - the Atmosphere library actually wraps and stores the Response object in its data structures. Otherwise it wouldn't be able to conduct asynchronous operations on the HTTP requests as it should. Could it be that under some dire circumstances the Atmosphere code writes to a Response that Tomcat has already collected for reuse?
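To make the hazard concrete, here is a minimal, self-contained sketch (deliberately simplified - this is not Tomcat or Atmosphere code) of what goes wrong when a component keeps a reference to a pooled object and touches it after the container has already recycled it:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class PooledResponse {
    private boolean committed;

    void setCommitted(boolean v) { committed = v; }

    void recycle() { committed = false; }    // reset the state for the next request

    void write(String body) {
        if (committed) {
            // mirrors Tomcat's reaction to a response in an unexpected state
            throw new IllegalStateException("response already committed");
        }
        committed = true;
        System.out.println("wrote: " + body);
    }
}

public class RecyclingHazard {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<PooledResponse> pool = new ArrayBlockingQueue<>(1);
        PooledResponse response = new PooledResponse();

        // An "Atmosphere-like" component keeps a reference and touches the response
        // a moment later, after the container has already recycled it.
        Thread lateWriter = new Thread(() -> {
            try {
                Thread.sleep(50);                // simulate late asynchronous work
                response.setCommitted(true);     // writes to an already recycled object!
            } catch (InterruptedException ignored) { }
        });
        lateWriter.start();

        // The "container" finishes the request, recycles the response
        // and returns it to the pool.
        response.recycle();
        pool.put(response);
        lateWriter.join();

        // The next request takes the same object from the pool
        // and finds it already committed.
        PooledResponse reused = pool.take();
        reused.write("hello");                   // throws IllegalStateException
    }
}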


Completing asynchronous request.


After some investigation I managed to nail down what causes Tomcat to wake up a thread and recycle a Response: it is the completion of an asynchronous request, which is done by calling asyncContext.complete() from the Servlet 3.0 API. Now I just needed to know which thread (and why) was completing the asynchronous requests. To achieve that, I rebuilt the Atmosphere library with an additional hack:
//my addition:
response.setHeader("asyncCompleted", Thread.currentThread().getName() + " asyncCompleted\n"
     + Arrays.toString(new Exception("asyncCompleted").getStackTrace())); 
//end of my addition
asyncContext.complete();
...and tweaked the Tomcat Response:
     public void recycle() {
         contentType = null;
         contentLanguage = null;
         locale = DEFAULT_LOCALE;
         characterEncoding = Constants.DEFAULT_CHARACTER_ENCODING;
         charsetSet = false;
         contentLength = -1;
         status = 200;
         message = null;
         commited = false;
         errorException = null;
+        String asyncCompleted = headers.getHeader("asyncCompleted");
         headers.clear();
+        if (asyncCompleted != null) { setHeader("asyncCompleted", asyncCompleted); }
  
         // update counters
         contentWritten=0;
     }
This allowed me to check in the debugger who had completed the asyncContext most recently. Using the same technique I had used to determine the code that set the committed flag, I just needed to trigger the IllegalStateException once again and dig down to the Response object in the debugger:








This allowed me to see two stack traces from the same thread:
1) the stack trace at the moment of calling asyncContext.complete()
2) the stack trace at the moment of committing the Response
By manually inspecting these two (and analyzing the difference) I finally arrived at this Atmosphere code:
// org.atmosphere.cpr.AtmosphereResourceImpl: ver 2.2.3 lines 737-782
public void cancel() throws IOException {
    try {
        if (!isCancelled.getAndSet(true)) {
            (...)
            asyncSupport.complete(this);  // <-- this calls eventually
                                          //     asyncContext.complete()
            (...)
            // We must close the underlying WebSocket as well.
            if (AtmosphereResponse.class.isAssignableFrom(response.getClass())) {
                AtmosphereResponse.class.cast(response).close();
                AtmosphereResponse.class.cast(response).destroy();
            }
            (...)
        }
    } finally {
(...)
It seems that Atmosphere's 'cancel' implementation first signals that the asynchronous operation is completed (and thus Tomcat recycles the Response on another thread). Only later does Atmosphere call .close() on the wrapped Response object. Further inspection revealed that the .close() implementation leads to changing the Response state to committed. Could it be that completing the asyncContext should actually mean that nobody can manipulate the Response object anymore? To test that hypothesis, let's use another feature of breakpoints:
Let's allow the asyncContext.complete() operation to run and freeze the current thread right at the succeeding statement (line 757 in the screenshot above - note I've checked the Suspend: "Thread" option instead of the default "All"). This allows other threads to work in the background, and possibly one of them will collect and recycle() the Response object. And in fact, stopping at this breakpoint and resuming after a few seconds made the IllegalStateException problem reproduce nearly 100% of the time. Furthermore, commenting out the line:
//              AtmosphereResponse.class.cast(response).close();


made the webdriver tests run stably. Whoa. Could it be that the Atmosphere authors didn't know that you can't manipulate Response objects after completing asynchronous operations? Perhaps the problem only manifests itself in the JIRA/Tomcat 7 environment - I guess Atmosphere is only occasionally used on such a combo. So perhaps nobody has hit this problem yet, because other containers have different (contention-proof) recycling mechanisms. Or maybe no recycling mechanisms at all. I don't know.

Conclusion.


Fixing the problem should be quite easy now. For example, one might modify the Atmosphere code and get rid of these unnecessary .close() actions. Another option is to raise an issue in the Atmosphere GitHub repository and wait until the authors fix it. Yet another is to wrap the Response objects in your own wrapper that cuts off any action after asyncContext.complete() has been called (see the sketch below). Or you might find yet another creative solution.
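As a very rough sketch of that last wrapper idea (the class and method names below are my own invention, and a real implementation would have to guard every mutating method of the response in the same way):

import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class CompletionAwareResponse extends HttpServletResponseWrapper {

    private final AtomicBoolean completed = new AtomicBoolean(false);

    public CompletionAwareResponse(HttpServletResponse response) {
        super(response);
    }

    // Call this just before asyncContext.complete(); from then on the wrapper
    // silently drops writes instead of touching a possibly recycled Response.
    public void markCompleted() {
        completed.set(true);
    }

    @Override
    public void setHeader(String name, String value) {
        if (completed.get()) {
            return;
        }
        super.setHeader(name, value);
    }

    @Override
    public void flushBuffer() throws IOException {
        if (completed.get()) {
            return;
        }
        super.flushBuffer();
    }

    // ...the remaining mutating methods (setStatus, getWriter, getOutputStream, ...)
    // would need the same guard; omitted here for brevity.
}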

Anyway, that's a different story. In this blog post I wanted to focus on analyzing concurrency issues, and as we have finally got to the root cause of the problem, I think we can finish the story here. I hope you enjoyed it - thanks for reading.

Oh, and by the way: if you are an active JIRA Agile user and you estimate story points for your issues (or plan to add that step to your development process), you might be interested in checking out our new add-on available on the Atlassian Marketplace: Agile Estimates. It's a tool for collaborative estimating directly within JIRA Agile, and incidentally also the product that made me write this blog post in the first place. :)



Wednesday, March 4, 2015

Implementing custom check-in policy for Visual Studio

Here at Spartez we are currently working hard on developing TFS4JIRA - our great app integrating TFS and JIRA. It is an indispensable tool especially (but not only!) for teams that use JIRA as their issue tracker and store their repository in TFS. They usually want to include a JIRA issue key in every check-in message so that the check-in will be linked and shown in the JIRA issue view (like in the image below).


The problem was that there was no mechanism to ensure that a developer included a JIRA issue key in their check-in comment. If there is no such key, then obviously the check-in is not shown in JIRA. We decided to create an open source TFS check-in policy plugin that forces developers to include at least one valid JIRA issue key. It's already available on the Visual Studio Gallery.

How a custom TFS check-in policy works internally

A custom check-in policy is a Visual Studio plugin installed by each developer separately, but its configuration is stored globally on the TFS server. To see what that means, let's first take a look at the diagram below:


Let's assume the following:
  • Team Lead installed and configured custom policy in his Visual Studio.
  • Developer A installed custom policy plugin in his Visual Studio.
  • Developer B did not install custom policy plugin in his Visual Studio.
In this case the Team Lead and Developer A will have the custom policy installed and configured properly (settings are stored on the TFS server), but Developer B will not use this custom policy at all (so he will successfully check in even if his code doesn't meet the policy requirements). That's why installing the custom policy plugin should be part of the onboarding process for each developer.

Creating custom check-in policy

A good place to start developing a custom check-in policy is this tutorial (remember to install the Visual Studio SDK before you dive in). It explains the basics well, but during development of our custom check-in policy we had to solve some interesting problems that weren't covered there:

  1. A custom check-in policy created for one version of Visual Studio will not work with other versions. That's because the version of the Microsoft.TeamFoundation.VersionControl.Client assembly referenced from your plugin has to match the version of the target Visual Studio. That's why we've created separate packages for VS 2010, 2012 and 2013 that reference a Core project with the business logic. Taking into account the fact that for each VS you need a separate VSIX package, the structure of our plugin looks as follows:


    The plugin projects are just very thin wrappers around Core. The code looks exactly the same for VS 2010, 2012 and 2013, so we took advantage of the Add as Link functionality in Visual Studio to keep only one copy of the source files.

  2. Initially we configured assembly name for projects 'Plugin for VS 2010', 'Plugin for VS 2012' and 'Plugin for VS 2013' in the following way:
    • 'Plugin for VS 2010' produced an assembly called CheckinPolicyVS2010.dll,
    • 'Plugin for VS 2012' produced an assembly called CheckinPolicyVS2012.dll,
    • 'Plugin for VS 2013' produced an assembly called CheckinPolicyVS2013.dll.

    Now let's assume that the Team Lead has installed and configured our custom check-in policy using Visual Studio 2010. The problem is that TFS stores the name of the assembly in which the custom check-in policy is defined (in this case CheckinPolicyVS2010.dll) on the central TFS server. So the custom policy will work for team members using VS 2010, but not for those using VS 2012 and VS 2013 (because they don't have CheckinPolicyVS2010.dll). The solution is to set the same assembly name for all three of the mentioned projects.

  3. Our custom policy connects to JIRA, so we need to store connection credentials somewhere. Of course the central TFS server is not a good place for such data, so we decided to use the Data Protection API (DPAPI) to encrypt them and store them locally for each developer. We used the great EasySec NuGet package to work with DPAPI. It gives you a very convenient API to encrypt and decrypt data:
    var encryptor = new EasySec.Encryption.DPAPIEncryptor();
    var stringToEncrypt = "stringToEncrypt";
     
    var encryptedString = encryptor.Encrypt(stringToEncrypt);
    var decryptedString = encryptor.Decrypt(encryptedString); 

Source code

You can find the source code for our plugin on Bitbucket. Feel free to comment and contribute.

Monday, July 28, 2014

Spartez sunny summer releases: Quick Notes, Agile Cards and more.


Although the holiday season is taking a toll on our team (sunny skies outside and the proximity of the beach seem to have something to do with this...) we're always cooking something in our Spartez lab.


Probably deserving the highest praise in this regard are Quick Notes, formerly known as "kwik!". Having recently undergone a thorough makeover in terms of general appearance and functionality, followed by a series of new releases with significant bugfixes, this clever little plugin is now improved and perfect for taking notes while browsing Confluence pages, just as you would use a Post-it note to annotate a physical piece of paper.


Quick Notes can be created in a number of ways, whichever the user finds most convenient. They support Wiki Markup, so that their content will always be clear and look professional.






If you want to notify someone using your Quick Note, just type '@' followed by their username into the note's content and that user will receive an automatic e-mail notification from you.


If a Quick Note overlaps a piece of the text that you are reading, just click on the 'x' button in the top right corner of the note to dismiss it. To reopen the note click on the highlighted text which it refers to.


Agile Cards have also made it to the new release delivery box. The series of recent usability improvements and bugfixes includes a few changes regarding the card templates. The cards’ print settings have been moved to templates; print templates can now be selected when printing from JIRA Agile boards. Also, templates are no longer linked to individual projects and are now sorted alphabetically in the "Select template" screen.

Coming up soon is a new release of TFS4JIRA with lots of new synchronization features.
In the meantime we're slipping our geeky flip-flops on, grabbing a handful of wifi and making our way over to the beach to continue summer's developments… See you in the sun!

 

Thursday, May 15, 2014

It's a world of things out there

Our standalone asset management system, a.k.a. Ephor, has moved to the next level with its very own dedicated minisite:


From now on ephor.spartez.com will be our hub for all Ephor-related information, frequently fed with practical tips and hints, with a bit of fun and a tinge of Ephor-pink to spice things up.

We've also taken a brave headfirst dive into asset management and launched an Ephor-linked Facebook site entitled "World of Things", where we'll be exploring a whole new view on asset management, stretching from packing a suitcase for a business trip or a survival camp (it's an asset management challenge, take it from us!) to the hazards of maintaining outdated office equipment.

Drop by and grab some asset management themed miscellanea!

JIRA Service Desk integrable Ephor4JIRA


Now that our Ephor standalone asset management system is out and about, we've recently launched a Service Desk-integrable Ephor4JIRA app.


While the standalone version of Ephor is integrable with Atlassian JIRA - so that you can create issues such as "Replace ink cartridge in printer" or "Buy 20 more chairs like this one for conference room" straight from the relevant item view in Ephor and immediately see them pop up in JIRA - the Ephor4JIRA app is basically a second incarnation of the same software that goes one step further. It's a JIRA plugin which, besides JIRA itself, also integrates directly with JIRA Service Desk. Plus, user management in Ephor4JIRA is (obviously) transparently integrated with JIRA's.

Here is a small preview:




So if you already have JIRA and want to tightly integrate it with your asset management - you should probably go for Ephor4JIRA.

If you don't have JIRA, or if you want to have a separate asset management server that is not hosted in JIRA (but can talk to JIRA, delegating asset workflows to it) - use Ephor standalone.

Both of these solutions are available for a free 30-day evaluation period (or, if you have no more than 100 assets to manage, they’re free altogether!) and offer what, from our own experience, we feel is most important for such a system to actually be useful to any serious asset-management enthusiast.