GSOC 2019: Coding Week 3

Another week of GSoC 2019 has absolutely flown by, and although I was down with an illness for much of it, I was still able to get some work done.

Both pages of the password reset user interface are about 90% done in terms of look and feel, but they had not yet been wired to the password reset mechanism in the Java back end. That is what I set out to achieve this week. As of now, once a user hits the button on the password reset page, a POST request is sent via REST to the server to verify the credentials entered. If the username or email matches an existing user in the database, a password reset email is sent to that user with instructions on how to reset their password.
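The request from the first page boils down to something like the sketch below. The endpoint path and payload shape here are my own placeholders for illustration, not the final OpenMRS REST API:

```javascript
// Sketch: build the password-reset request sent when the user submits
// their username or email. The URL and body shape are assumptions.
function buildResetRequest(usernameOrEmail) {
  return {
    url: '/ws/rest/v1/passwordreset', // assumed endpoint, not the final API
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ usernameOrEmail })
    }
  };
}

// Browser usage in the OWA:
// const { url, options } = buildResetRequest('gsoc19');
// fetch(url, options).then(res => { /* show confirmation or error */ });
```

Keeping the request construction in a small pure function like this also makes the OWA easier to unit-test without a running server.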

[Screenshot: Screen Shot 2019-06-15 at 11.07.03 PM]

As seen in the image above, I have a user whose username is gsoc19. After entering the username and hitting “Send Password Reset Link”, an email is sent to that user’s email address, which can be seen below:

[Screenshot: Screen Shot 2019-06-15 at 11.01.43 PM]

Clicking the “Reset your password” button then takes you to the page below:
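The emailed link presumably carries a one-time token that identifies the reset request. As a sketch (the query parameter name and the endpoint are assumptions, not the actual implementation), the second page could extract and use it like this:

```javascript
// Sketch: pull the reset token out of the emailed link and build the
// request that submits the new password. Parameter name ('token') and
// endpoint path are assumptions for illustration only.
function parseResetToken(linkUrl) {
  const url = new URL(linkUrl);
  return url.searchParams.get('token');
}

function buildNewPasswordRequest(token, newPassword) {
  return {
    url: '/ws/rest/v1/passwordreset/' + encodeURIComponent(token), // assumed
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ newPassword })
    }
  };
}
```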

[Screenshot: Screen Shot 2019-06-15 at 11.07.35 PM]

Blocker: This might sound a little trivial, but I can’t seem to load the OpenMRS logo that is supposed to appear at the top of the password reset email. I placed the logo image in the resources folder of openmrs-core and tried to reference it from the email template (an HTML file) using all the tricks I know of, but I still could not get it to work. Any insight on this would be much appreciated.


GSOC 2019: Coding Period Week 2

The second week of the coding phase of Google Summer of Code 2019 just wrapped up, and so far so good. The user interface for the OpenMRS password reset feature is slowly taking shape. My initial plan for the OWA was that it would comprise just two pages:

  • The initial page, which comes up when a password reset button is pressed. It asks the user to enter their email or username, after which an email is sent to that address containing instructions on how to reset the password.
  • The actual password reset page, accessed through a link included in the password reset email. It lets the user enter a new password and confirm it before the reset is finalized.
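The confirm step on the second page reduces to a small validation check before anything is sent to the server. A minimal sketch, assuming an eight-character minimum (the real OWA may enforce the actual OpenMRS password policy instead):

```javascript
// Sketch: validate the new password and its confirmation on page two.
// The length rule is an assumption, not the real OpenMRS policy.
function validateNewPassword(password, confirmation) {
  if (password.length < 8) return { ok: false, reason: 'too short' };
  if (password !== confirmation) return { ok: false, reason: 'mismatch' };
  return { ok: true };
}
```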

Screenshots of both pages of the OWA can be found below:

[Screenshot: Screen Shot 2019-06-07 at 2.53.37 PM]

[Screenshot: Screen Shot 2019-06-07 at 2.53.49 PM]

Further edits are being made to ensure the design of both pages is consistent with the OpenMRS style guide.

Blockers: I intended to use reCAPTCHA on the first page, but while setting it up I noticed I would need to register the specific domain name through which the app is accessed, so that reCAPTCHA loads only on that domain or its subdomains. My local OWA runs on localhost:8080, but I can’t register that, since I can’t assume everyone deploying this app will use the exact same domain. Still trying to figure that out.

Two weeks down, two and a half months to go. Let’s roll!

GSOC 2019 Coding Period: Week 1

The Google Summer of Code 2019 coding period kicked off on Monday, 27 May 2019, a couple of days ago. Accordingly, I began working on my project, UI for Password Reset. Using the OpenMRS Yeoman generator, I created a scaffolded OWA that will serve as the basis of my project, with ReactJS as its main library. I also created a GitHub repository containing the code for the scaffolded Open Web App; subsequent commits will be made to this repository.

I also discussed with my mentor the production of more detailed mockups (more detailed than the ones in my GSoC application) of what the final user interface should look like, so that we follow a specific guide throughout the design and implementation process, knowing we are working towards a concrete goal. This is in progress at the moment.

First week down, an entire summer to go. Super excited!

GSOC 2019: End of Community Bonding

It was an absolute delight to be notified that I’d be taking part in Google Summer of Code 2019 with OpenMRS. Having been a member of the OpenMRS community since 2017, I have experienced first-hand the beauty of being part of such a wonderful organization: one where help is always just a click away, everybody is treated with respect, and coding is fun.

My project for the GSoC 2019 session is titled UI for Password Reset. It entails building an appropriate user interface for the password reset functionality introduced in GSoC 2018, as well as making some modifications to that earlier work. The expected finished product is a complete, effective and responsive password reset user interface.

Over the course of the Community Bonding period, I was in constant contact with my mentors to figure out how to implement this project. Although I am proficient in Angular, my mentor advised me to study ReactJS, as that would likely be the technology used for the user interface. I picked up a few React courses on Udemy and on the official ReactJS website. While I won’t say I am a React expert yet, I am comfortable working with it and will keep getting better.

I am enthusiastic and look forward to these three months of coding and the challenges they will bring. LET’S GET ROLLING!!!

GSOC Second Coding Phase: Week 2

It’s been two weeks already since the first evaluation for GSoC 2018, which I was elated to pass thanks to the large amount of effort I invested. I was able to implement the dictation using Pocketsphinx from the CMU Sphinx toolkit, which I integrated as external JavaScript classes as explained in previous blog posts.

Over the past few weeks, I planned on getting dictation running on reports created with report templates as well, as I had initially implemented voice only on the free text report. My goal for the second coding phase was to get dictation working on report templates and to begin work on my OWA, so that I would have a functional OWA by the second evaluation. However, the work Larry did last year on creating reports from report templates was never merged, so I had to do that part myself before incorporating the voice dictation. Larry provided me his work, but simply merging it with mine was not possible: a lot has changed in LibreHealth Radiology since then, and code of his that used to be functional threw some really puzzling errors when I tried to merge it. I had to study his work in depth, understand it, and work out where the problems were coming from, then make a few changes. Although it took longer than expected, I got it to work.

As of now, dictation is possible with both report templates and free text templates. I commenced work on the OWA yesterday and will put in extra hours this week to meet the second evaluation deadline. After the evaluation, I’ll work on saving the reports into the new Spring Data architecture of LibreHealth Radiology.


It’s been a roller coaster ride in GSoC so far, with lots of highs and lows. This was a really busy week for me: as I pointed out in my last blog post, I had continuous assessment tests, so I was not able to get much done. However, I did solve the challenge from that post of importing the JavaScript classes I had created to handle voice dictation into the appropriate JSP page. As of now, when you click the “Start Dictation” button, the browser requests access to your microphone and dictation begins, but I am still trying to figure out how to write the text output into the TinyMCE editor in real time; for now, while the speaker is talking, the output is written to the terminal instead. That is what I am working on. Once that is done, I will commence language model training so that the dictation recognizes more than just the words one to ten, as it does now.
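Routing the recognizer output into the editor instead of the terminal could look something like the sketch below. Here `editor` stands in for the TinyMCE instance (e.g. `tinymce.activeEditor`), and the `{ hyp: '...' }` message shape from the recognizer worker is an assumption, not the exact pocketsphinx.js protocol:

```javascript
// Sketch: push each new recognizer hypothesis into the report editor,
// appending only the newly recognized portion. The worker message shape
// ({ data: { hyp } }) is an assumption for this sketch.
function makeHypothesisHandler(editor) {
  let lastHyp = '';
  return function onWorkerMessage(event) {
    const hyp = event.data && event.data.hyp;
    if (typeof hyp === 'string' && hyp !== lastHyp) {
      const addition = hyp.startsWith(lastHyp)
        ? hyp.slice(lastHyp.length) // hypothesis grew: append the tail
        : ' ' + hyp;                // hypothesis changed: append it whole
      editor.insertContent(addition);
      lastHyp = hyp;
    }
  };
}

// Browser usage:
// recognizerWorker.onmessage = makeHypothesisHandler(tinymce.activeEditor);
```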


My main blockers so far in this project have come from the fact that working with external JavaScript files in OpenMRS is quite tricky. I found myself getting stuck on trivial things such as referencing one JavaScript class from another; I was able to resolve that, though it slowed me down quite a bit. Right now my main blocker is getting the speech input displayed in real time in the TinyMCE editor as the user speaks, rather than in the terminal. I hope to resolve that in the coming days and then commence language model training for actual radiology dictation, beyond the one-to-ten vocabulary the system currently possesses. Alongside that, I will begin work on some of the basic components for the OWA.
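For context, the one-to-ten vocabulary corresponds to a tiny finite-state grammar. A sketch of how such a grammar could be described for pocketsphinx.js follows; the `{ numStates, start, end, transitions }` shape is based on my reading of the pocketsphinx.js documentation and should be treated as an assumption here:

```javascript
// Sketch: a one-state grammar that accepts any sequence of the given
// words (here, ONE..TEN). The object shape is an assumption taken from
// the pocketsphinx.js docs, not verified against this project's code.
function numberGrammar(words) {
  return {
    numStates: 1,
    start: 0,
    end: 0,
    transitions: words.map(w => ({ from: 0, to: 0, word: w }))
  };
}

const grammar = numberGrammar(
  ['ONE', 'TWO', 'THREE', 'FOUR', 'FIVE', 'SIX', 'SEVEN', 'EIGHT', 'NINE', 'TEN']
);
```

Replacing this toy grammar with a trained radiology language model is exactly the step described above.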


Over the past two weeks I’ve been working on the voice integration part of my project. I was initially confident that I could achieve this goal using sphinx4, the Java library of CMU Sphinx. I had actually done the integration, and it ran successfully on the command line: I could dictate words and print them out with println().

But then I hit a brick wall trying to turn that output into real-time input in the TinyMCE editor for the report. Despite lots of research into Ajax and other technologies, I couldn’t find a way for the Java API to communicate with the browser in real time, so that words are written into the editor as a person speaks.

I then found a way to use pocketsphinx, the C library of CMU Sphinx, by compiling it into JavaScript code that runs directly in the browser. This produced a pocketsphinx.js file which I could use directly in the JSP files, without having to interact with the radiology back end.
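In this setup the recognizer typically runs inside a Web Worker and is driven by posted messages. As a sketch, the startup sequence might look like the following; the command names (`initialize`, `addWords`, `start`) follow the pocketsphinx.js documentation but should be treated as assumptions here, along with the worker file name:

```javascript
// Sketch: the messages that initialize the recognizer, load a word list,
// and start recognition. Command names are taken from the pocketsphinx.js
// docs and are assumptions for this sketch.
function startupCommands(wordList) {
  return [
    { command: 'initialize' },
    { command: 'addWords', data: wordList }, // e.g. [['ONE', 'W AH N'], ...]
    { command: 'start' }
  ];
}

// Browser usage:
// const worker = new Worker('recognizer.js'); // assumed wrapper around pocketsphinx.js
// startupCommands(words).forEach(msg => worker.postMessage(msg));
// worker.onmessage = event => { /* consume recognition hypotheses */ };
```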

This was way more convenient to work with. I have got the voice integration running using this method, although there are some minor fixes I need to make before it’s actually functional.


The JavaScript code that does the actual voice transcription lives in a class called main.js, in the omod resources folder of the lh-radiology module, alongside some other JavaScript classes this main class depends on. My problem is that I can’t find a way to include any of those JavaScript classes in the radiologyReportForm.jsp page, where they are to carry out their functions. It is similar to what is discussed in this OpenMRS Talk thread. For a small grammar (say, if we want the voice dictation system to recognize just the numbers one to ten), it is possible to include the necessary JavaScript code directly in the JSP page, but for the larger grammars this project needs, that would be impossible.

Another challenge is that I may get little done this week, as my continuous assessment tests begin on Wednesday, 30 May.