Who We Are
DISCO is a legal tech software company. Our objective is to own the legal tech market and become the leader in legal technology, as Salesforce has done in the sales technology space. Given the massive growth of data over the last 20 years, poorly built legal technology products have severely decreased lawyers’ ability to practice law.
Our fundamental mission includes building a unified technology platform for the practice of law, composed of 7+ products released over the next 10 years. Great technology can solve problems of scale in data, in laws, and in business operations that have distracted lawyers from doing what they went to law school to do. DISCO is fixing the law by automating the parts of the practice that can be automated so that great lawyers can focus on tasks that really do require human legal judgment.
To date, we have substantially disrupted the legal tech market with a lawyer-inspired, consumer-grade interface and a cloud-enabled technology platform that offers unprecedented performance and cost savings. Thoughtful product planning and product design are core tenets of our “product first” business strategy and culture.
We intend to build a multi-billion dollar business and think you should come along for the ride because:
- DISCO is a very successful company, more than doubling revenue every year for the last 5 years. We were the first mover to a cloud-based platform, which has caused mass disruption within our market.
- Our CEO is a true market visionary. He graduated with a computer science degree at the age of 15 and followed with a JD from Harvard Law School at the age of 19. His unparalleled insights into the fundamental issues in legal and the potential of technology and artificial intelligence to change our market at its core provide the guiding light for DISCO’s long-term strategy.
- We believe that product delivery professionals including product managers, product designers and engineers differ from one another by at least a factor of 10. At DISCO, we only hire the top 1%, pay them well, and with equity, everyone has effectively been getting a raise each and every day. Given our product first mindset, product professionals are very much stars of the show. Our logo, the circle and square, represents the best lawyers and the best product professionals in the world.
- We measure product delivery velocity by dollars of revenue per line of code, vs simply lines of code. This drives a very thoughtful and deliberate product design and development process that ensures we’re going to make money when we ship products. We hire many more product managers and designers per engineer than most companies to ensure that our engineers have a disambiguated product intent when they are building.
- As a rule, we don’t commit to external product delivery dates as we believe that unnecessarily constrains our creativity from both a product and technology point of view.
- At DISCO we only hire very good humans that we would entrust with our children. Don’t worry, we won’t actually drop our kids off at your place, but we need to know that they’d be in good hands if we did. Respect isn’t earned at DISCO; it is assumed. Good humans inherently treat everyone respectfully. This is a very important concept at DISCO.
- Given the talent level at DISCO (only top 1%), the cutting-edge cloud-based technology stack, and thoughtful and novel product and design approach, you’ll find yourself learning at a rate you’ve not likely experienced in your career. Given that we only hire professionals that are passionate about their craft, you’ll truly enjoy building a great software product and get in the best “career shape” of your life.
- Over the next 4 years, we’ll be growing our product delivery organization from 85 professionals to 500+. There will be incredible growth opportunities along the way.
- We use the “2 Pizza Team” organization design where small autonomous teams own a piece of a product or platform and ship software at rates comparable to a very lean and scrappy startup. We achieve consistency across these teams in the areas of design, product-wide use cases and technical concerns through a strategically focused set of overlay functions.
- Finally, while we’re an incredibly fast growing organization, as a rule, we do not work crazy long hours. We believe in continuous product delivery, continuous product planning and design, continuous regular sleep schedules, continuous regular vacation, and continuous fun if you’re passionate about your craft.
If you want to win while getting better than you’ve ever been, come to DISCO.
The Analytics Platform Architect is important because:
Great products need great architecture. As an analytics platform architect, you will help design and build the platform to enable the business intelligence that will drive our business. We are hiring people who approach design from a systems perspective and have the aspirational goal of everything ‘well-crafted.’ A great candidate can deliver real customer value while pursuing ‘high marks’ on these architecture quality attributes:
Availability, Scalability, Interoperability, Modifiability, Performance, Security, Testability.
A day in your life as an Analytics Platform Architect at CS Disco:
Might include designing/building enterprise-wide solutions such as:
- Enterprise Data Warehouse platform integrating facts from everything from product subsystem event streams to back-office CRM tools.
- Business Intelligence and Data Analytics capabilities to drive low-latency Key Performance Indicator generation.
- Event Bus and Event Sourcing capabilities that provide business and engineering leverage and efficiencies.
- Highly scalable and crazy performant data pipelines.
- Transactional or eventually consistent stores that provide well-encapsulated domain object semantics.
- Orchestrated scale-out data pipelines that can leverage serverless and containerized compute that balances cost, latency, and duration.
- Algorithmically intensive data engines that operate on streaming, large, or multi-tenant datasets.
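The Event Sourcing capability listed above can be sketched minimally: an aggregate keeps no mutable state of its own, instead deriving its current state by replaying an append-only event log, which is also what lets downstream consumers (KPI generation, warehouse loads) rebuild their own read models from the same stream. All names here (`Event`, `Account`, `deposit`) are illustrative assumptions, not DISCO’s actual platform code.

```python
from dataclasses import dataclass

# Illustrative event-sourcing sketch: state is a pure fold over an
# append-only event log. Names and domain are hypothetical, chosen
# only to show the pattern.

@dataclass(frozen=True)
class Event:
    kind: str     # e.g. "deposited" or "withdrawn"
    amount: int   # integer cents, to avoid float rounding

class Account:
    """Aggregate whose balance is derived solely by replaying its events."""

    def __init__(self) -> None:
        self._log: list[Event] = []  # the append-only event log

    def deposit(self, amount: int) -> None:
        self._log.append(Event("deposited", amount))

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self._log.append(Event("withdrawn", amount))

    @property
    def balance(self) -> int:
        # Replay the log from the beginning; any other consumer of the
        # event stream could run the same fold to build its own view.
        total = 0
        for e in self._log:
            total += e.amount if e.kind == "deposited" else -e.amount
        return total

acct = Account()
acct.deposit(500)
acct.withdraw(200)
print(acct.balance)  # 300
```

In a production system the log would live on a durable bus such as Kafka rather than in memory, but the core leverage is the same: one authoritative event stream, many independently derivable views.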
Successful Analytics Platform Architect candidates:
- Must design and communicate external and internal architectural perspectives of well-encapsulated systems (e.g. Service Oriented Architecture, Docker-based services, micro-services) using tools such as architecture/design patterns and sequence diagrams.
- Must have experience translating requirements into a robust, scalable platform data architecture to support dynamic workloads and world-class analytics.
- Must understand architectural tradeoffs with regard to data latencies. Must have deep experience with logical and physical modeling including techniques to deal with data dimensionality.
- Must have experience with Business Intelligence and Data Infrastructure/Analytics technologies such as: Redshift, Snowflake, Vertica, Teradata, Oracle RAC, MapR, PostgreSQL, Tableau, Qlik, Looker. Experience with predictive capabilities is a strong plus.
- Must have experience with ‘Big Data’ technologies such as: columnar databases, Kafka, AWS EMR, Apache Spark, DataFlow or pipeline systems, Elasticsearch, NoSQL stores.
- Must have experience applying data governance principles to implement processes, controls, and standards for managing data across an enterprise, including metric definition lifecycle management.
- Must have experience with the design, implementation, and operation of data-intensive, distributed systems. (The book Designing Data-Intensive Applications is a good reference.)
- Should have experience using Continuous Integration and Continuous Deployment (CI/CD) with an emphasis on a well-maintained testing pyramid.
- Should have API and data model design or implementation experience, including how to scale out, make highly available, or map to storage systems.
- Should have experience with multiple software stacks, have opinions and preferences, and not be married to a specific stack.
- Should have experience designing and operating software in a cloud provider such as AWS or GCP.
- Should know how to identify, select, and extend 3rd party components (commercial or open source) that provide operational leverage but do not constrain our product and engineering creativity.
- Might know about algorithm development for intensive pipeline processing systems.
- Might have experience designing, modifying, and operating multi-tenant systems.
- Might understand how to design and develop from a security perspective.
Our Technology Stack:
Cloud Provider - AWS: EC2, Lambda, Aurora, Redshift, DynamoDB, ECS, SQS, SNS, Kinesis, S3, CloudFront, CloudFormation, SageMaker, KMS, CodePipeline, etc.
DSL-based Search: multiple large-scale Elasticsearch clusters searched using our Disco Query Language (DQL).
Event Bus: Kafka and Schema Registry
3rd Party Vendors: Redis, Auth0 for Cloud Identity Federation (SSO, SAML, etc).
AI: MinHash, FastText, Word2Vec, Convolutional Neural Nets, Algorithmia (Lambda with GPUs) for training, PyTorch, Recurrent Neural Networks, Latent Dirichlet Allocation for Topic Modeling, etc.
Deployment: Terraform, Docker (via ECS), Consul for: App Config, Service Discovery, Shared Secrets.
Visibility: ELK Stack for logging, Datadog, New Relic, Sentry.io
Transport Mechanisms: Protobuf, Avro, HTTP REST/JSON
CI/CD: Jenkins, CodePipeline, GitHub, Artifactory