This Gist contains work done for the Zulip open-source team chat software during Google Summer of Code 2019.

Work Product Document for the Zulip open source project as a GSoC 2019 student developer.

Zulip server - powerful open source team chat

Areas worked in: Search, Message View, group PMs, bots, production

  • PR: search: Add is:public-stream to search entire history of public streams. #12999

    • Status: Open; reviewed once, but the spec was updated, so I am making changes to match the new spec. The original spec was fully implemented and tested, and the new spec is also close to completion.
    • Solves issue #8859. Right now in Zulip, past message history can only be searched one stream at a time. This PR gives users the ability to search the history of all of the organisation's streams (both subscribed and unsubscribed) at once, including messages sent before the user joined the organisation.
    • Zulip has three kinds of streams: public, private, and private with shared history. The initial spec was to only allow searching public streams, and I completed that work along with tests; the search syntax was is:public-stream. After review, Tim suggested expanding it to subscribed private streams too, so the syntax is now:
      • streams:public fetches all the history of public streams
      • streams:subscribed fetches all the history of subscribed streams, i.e. public streams and private streams that allow shared history. Results for streams:public are the same as for is:public-stream; the only difference is the syntax. The work for streams:public is complete; the work for streams:subscribed remains. I think this completes a capability a team chat product needs: being able to search the whole organisation's message history lets users find information that was shared in the past but may be hidden in an unsubscribed stream. (A minimal sketch of how these operators change the search scope follows below.)
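A minimal sketch of the idea behind these operators, assuming hypothetical names (NarrowTerm and history_scope are illustrative, not Zulip's actual API): the backend inspects the narrow's operator/operand pairs and widens the search scope accordingly.

```python
# Illustrative sketch only; NarrowTerm and history_scope are hypothetical names.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NarrowTerm:
    operator: str
    operand: str

def history_scope(terms: List[NarrowTerm]) -> Optional[str]:
    """Decide how much message history the search query may span."""
    for term in terms:
        if term.operator == "streams" and term.operand == "public":
            # Full history of every public stream, including messages sent
            # before the searching user joined the organisation.
            return "all_public_streams"
        if term.operator == "streams" and term.operand == "subscribed":
            # Full history of subscribed streams (public, plus private
            # streams that allow shared history).
            return "subscribed_streams"
    return None  # default: only what the user's subscriptions normally allow

print(history_scope([NarrowTerm("streams", "public"), NarrowTerm("search", "deadline")]))
```
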
  • PR: filter: Add more narrow filters that cannot mark messages as read. #12849

    • Status: Open and waiting for review.
    • Follow-up based on a comment on PR #12747. Adds more narrows where messages are not marked as read.
    • As a product decision, the team decided to expand the set of search views/narrows where messages should not be marked as read, e.g. searching for messages containing a link with the has:link operator, or for messages where you were mentioned with the is:mentioned operator. Most of the time went into exhaustive tests to ensure everything works, since this feature has many possible cases; exhaustive testing ensures no issues crop up now or in the future. (A sketch of the kind of predicate involved follows below.)
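A sketch of the kind of predicate involved, assuming a narrow is a list of (operator, operand) pairs; the function name and exact rule set here are illustrative, not the PR's actual code.

```python
# Hypothetical predicate; names and the operator/operand rules are illustrative.
def can_mark_messages_read(narrow: list) -> bool:
    """Return False for search-like narrows that should not consume unreads."""
    for operator, operand in narrow:
        if operator == "search":
            return False
        if operator == "has" and operand == "link":
            return False
        if operator == "is" and operand == "mentioned":
            return False
    return True

assert can_mark_messages_read([("stream", "design")]) is True
assert can_mark_messages_read([("has", "link")]) is False
assert can_mark_messages_read([("is", "mentioned")]) is False
```
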
  • PR: narrow: Fix to show last message in narrow when narrow allows. #12847

    • Status: Open and waiting for review (the code paths are complicated and need an in-depth review).
    • Completes a feature that was supposed to work properly in the previous PR. PR #12747 included a feature to show the last message instead of the first unread message in certain narrows. All the logic and functions were implemented and called, but the UI was not updating. In the initial manual testing, showing the first unread message was tested, but showing the last message when unread messages were present was not, and there was no automated test for this situation, so the problem went unnoticed. This PR fixes it and adds tests to ensure this kind of issue is caught in the future. Lesson: exhaustive automated tests keep features working now and in the future. (The anchor-selection rule is sketched below.)
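The rule being fixed can be sketched like this; the function and argument names are hypothetical, and the real webapp code is JavaScript rather than Python.

```python
# Illustrative sketch: pick the message to land on when entering a narrow.
def initial_anchor(message_ids, first_unread_id, narrow_marks_read):
    if narrow_marks_read and first_unread_id is not None:
        return first_unread_id  # usual behaviour: jump to the first unread message
    return message_ids[-1]      # search-like narrows: show the latest message instead

assert initial_anchor([10, 11, 12], first_unread_id=11, narrow_marks_read=True) == 11
assert initial_anchor([10, 11, 12], first_unread_id=11, narrow_marks_read=False) == 12
```
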
  • PR: search: Don't mark messages as read in search narrow. #12747

    • Status: Merged and Closed.
    • Solves issue #12556. The basic problem it solves is that new unread messages should not be marked as read when they appear in a search query for an older conversation.
    • This was a high-priority feature, highly requested by the community; both Tim and Yago were excited to see it happen. The main work was finding the relevant places to make changes in the existing codebase. Zulip's frontend is old and large, and in some places the code is quite complicated, so reading the existing code and figuring out where to make changes was time consuming. Every new feature required reading new files and lines of code. The actual logic was simple, and after some experience, writing unit tests was also not that difficult.
    • Merged commits: 6ec40cf, 648a60
  • PR: Fix searching and viewing Group PM and refactor of by_pm_with #12661

    • Status: Merged and Closed.
    • Solves issues #12601 and #12593.
    • Initially #12593 seemed to be a backend bug, but investigation led to two completely different bugs: one a frontend bug (whose source was very hard to find) and the other, as expected, a backend one. Investigation details
    • The bug fix and the refactoring work for #12601 overlapped, so I solved both in a single PR in three commits. The refactor took me into one of Zulip's core data models, the Recipient model. I got a chance to understand how the DB schema and SQLAlchemy queries work (a minimal query sketch follows below), and I also learnt how to write clean code while doing the refactor.
    • Merged Commits: 48786, c971576, 00cdba2, 45f87ff
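A self-contained SQLAlchemy sketch of the kind of query the by_pm_with refactor touches: resolving a conversation to a single Recipient row and filtering messages by it. The table and column names here are simplified stand-ins, not Zulip's real schema.

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Message(Base):
    __tablename__ = "message"
    id = Column(Integer, primary_key=True)
    recipient_id = Column(Integer, index=True)  # stand-in for the FK to a Recipient row
    content = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Message(recipient_id=7, content="group PM message"),
        Message(recipient_id=9, content="another conversation"),
    ])
    session.commit()

    # A "pm-with"-style narrow: resolve the participants to one recipient id,
    # then filter every message addressed to that recipient.
    group_pm_recipient_id = 7
    rows = session.execute(
        select(Message).where(Message.recipient_id == group_pm_recipient_id)
    ).scalars().all()
    print([m.content for m in rows])  # ['group PM message']
```
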
  • PR: WIP Administrator Bot #12589

    • Status: Open; work in progress due to some doubts about the spec.
    • Solves issue #12424
    • A refactoring prep commit was merged: db3d81613bf0530db
    • The ability for an admin to create a bot with admin privileges is done, along with checks preventing non-admins from doing so (a sketch of this check follows below). I also updated tests to cover some edge cases mentioned in a comment. The only missing piece is adding constraints for things an admin bot cannot do but only humans can. I first manually audited the whole codebase to find the existing admin-only and human-only capabilities (see conversation); Tim and Rishi discussed what to add and what not to. I started working on the given spec but later switched to other work due to some doubts about the admin bot specification. The PR is still stuck at that stage because I am waiting for those doubts to be resolved.
    • Merged Commits: db3d816; Unmerged Commits: https://github.com/zulip/zulip/pull/12589/commits
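A minimal sketch of the completed permission check; only the rule comes from the PR description, while the names and signature are hypothetical.

```python
# Hypothetical names; the rule: a bot may be given admin privileges
# only by an organization administrator.
class BotCreationError(Exception):
    pass

def check_bot_creation(creator_is_admin: bool, bot_role_is_admin: bool) -> None:
    if bot_role_is_admin and not creator_is_admin:
        raise BotCreationError("Must be an organization administrator to create an admin bot")

check_bot_creation(creator_is_admin=True, bot_role_is_admin=True)  # allowed
try:
    check_bot_creation(creator_is_admin=False, bot_role_is_admin=True)
except BotCreationError as err:
    print(err)
```
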
  • PR: Cleaned and Rebased version of PR #10102 Multiple api keys #12511

    • Status: Open and waiting for review.
    • Resolves merge conflicts and rebases and cleans up PR #10102.
    • PR #10102 was started by my mentor, Yago, during his GSoC 2018 work. I had to resolve merge conflicts and rebase it onto the latest master. This PR allows a user to have multiple API keys; as of the time of writing, a Zulip user has a single regeneratable API key. The original PR did not delete API keys from the cache, but cache-deletion logic for the single-key case had since been added to the codebase by Tim Abbott, and I extended it to work with the multiple-API-key model (see https://github.com/zulip/zulip/pull/12511/files#r291530222 and the sketch below). In the process I learnt more about Zulip's cache code. This PR also improved my git skills: I learnt a lot about cherry-picking, searching old commits, and diffs in practice.
    • In the second work period, merge conflicts occurred again with PR #12511. I had to rebase and fix a new set of failing tests. This experience gave me insight into how a fast-moving project can easily break previously unmerged work.
    • Unmerged Commits: https://github.com/zulip/zulip/pull/12511/commits
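A sketch of the cache-deletion extension described above, using a plain dict as a stand-in for Zulip's cache layer; the key format and function names are illustrative.

```python
# Illustrative only: the cache is a dict stand-in and the key format is made up.
def api_key_cache_key(api_key: str) -> str:
    return "user_profile_by_api_key:" + api_key

def flush_api_keys(cache: dict, api_keys: list) -> None:
    """With multiple API keys per user, every key's cache entry must be deleted."""
    for api_key in api_keys:
        cache.pop(api_key_cache_key(api_key), None)

cache = {api_key_cache_key("key-1"): "user-1", api_key_cache_key("key-2"): "user-1"}
flush_api_keys(cache, ["key-1", "key-2"])
assert cache == {}
```
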
  • PR: bots: Bots can post to announcement-only streams if their owner can. #12383

    • Status: Merged and Closed
    • Solves issue #12310
    • Most of the time went into writing and correcting unit tests because of a buggy test endpoint. I initially wrote tests using an API endpoint that similar tests in the main codebase already used, but that pre-existing code was buggy; I fixed those tests in commit d024c30cd8baf and later wrote the correct tests for #12383. This experience taught me that pre-existing code or tests can have bugs in them. Always look at the code around you and investigate anything weird, even if it is not directly related to your current work, following the principle of "leave the code better than you found it". (A sketch of the access rule itself follows below.)
    • Merged Commits: d60f6c9, a98447b
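The access rule itself can be sketched as follows; the signature is hypothetical and the real check is more involved.

```python
# Hypothetical sketch of the rule from the PR title: in an announcement-only
# stream, a bot may post exactly when its owner may.
def can_post_to_announcement_only_stream(sender_is_bot: bool,
                                         sender_is_admin: bool,
                                         owner_is_admin: bool) -> bool:
    if sender_is_bot:
        return owner_is_admin  # the bot inherits its owner's ability
    return sender_is_admin

assert can_post_to_announcement_only_stream(True, False, owner_is_admin=True)
assert not can_post_to_announcement_only_stream(True, False, owner_is_admin=False)
```
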
  • PR: Batch read flag requests to at most 1k message IDs #12264

    • Status: Merged and Closed
    • Solves issue #11956, where a production server could hold hundreds of locks at the same time. Frontend UIs may attempt to mark tens of thousands of messages, and the backend query for flagging messages could run for extremely long periods of time (e.g. 72478s in one instance!).
    • The solution implemented is to make the webapp batch the message-flagging requests to at most ~1K message IDs at a time (sketched below). Most of the work went into writing frontend unit tests to ensure the feature works correctly. I figured out a way to intercept the request in JavaScript and ensured that only a list of IDs no longer than the batch size is sent as the request payload. I mocked the server response, marked messages in the UI, and ensured that already-flagged message IDs are not sent in the next batch. This PR gave me experience simulating conditions in test code.
    • Merged Commits: 1902f52
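The batching idea, sketched in Python for brevity; the actual change is in the JavaScript webapp and the request payload shown here is simplified.

```python
# Illustrative sketch of the batching approach.
BATCH_SIZE = 1000

def batched(message_ids, batch_size=BATCH_SIZE):
    """Yield successive slices of at most batch_size message IDs."""
    for start in range(0, len(message_ids), batch_size):
        yield message_ids[start:start + batch_size]

def mark_as_read(message_ids, send_request):
    # One flag-update request per batch instead of a single huge request
    # that can hold database locks for a very long time.
    for batch in batched(message_ids):
        send_request({"messages": batch, "op": "add", "flag": "read"})

sent = []
mark_as_read(list(range(2500)), send_request=sent.append)
print([len(req["messages"]) for req in sent])  # [1000, 1000, 500]
```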