Hacking Django websites: clickjacking

Part 2 of the “Hacking Django” series: part 1, part 3

Clickjacking is an attack where one of your logged-in users visits a malicious website, and that website tricks the user into interacting with your website through an iframe.

As an example, consider the green "Create pull request" button, which will create a pull request on GitHub as the logged-in user:

Let’s say the malicious website has some nefarious React:

import React from 'react';

export default function ClickjackOverlay() {
  const [position, setPosition] = React.useState({ clientX: 0, clientY: 0 });

  const updatePosition = event => {
    const { clientX, clientY } = event;
    setPosition({ clientX, clientY });
  };

  React.useEffect(() => {
    document.addEventListener("mousemove", updatePosition, false);
    document.addEventListener("mouseenter", updatePosition, false);
    return () => {
      document.removeEventListener("mousemove", updatePosition);
      document.removeEventListener("mouseenter", updatePosition);
    };
  }, []);

  return (
    <div>
      {/* iframe of the victim page; src omitted here */}
      <iframe
        style={{zIndex: 100, position: 'absolute', top: position.clientY - 200, left: position.clientX - 650}}
      />
      <div>Some website content</div>
    </div>
  );
}

When a logged-in user visits the malicious website, the iframe follows the mouse so that the target button is always under the cursor. When the user clicks, they click inside the iframe.

Now that is very obvious – the user can see the iframe – but hiding it is one change away: style={{display: 'none', ...}}. For the sake of the demo I used style={{opacity: '0.1', ...}} (otherwise you would see nothing interesting):

Clickjack prevention middleware

The solution is simple: set an iframe embedding policy for your website. Adding django.middleware.clickjacking.XFrameOptionsMiddleware to MIDDLEWARE in settings, along with X_FRAME_OPTIONS = 'SAMEORIGIN', results in an X-Frame-Options response header with the value SAMEORIGIN, and modern browsers will then refuse to embed your website on other origins.
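In settings terms that amounts to something like the following (a minimal sketch; your real MIDDLEWARE list will contain more entries):

```python
# settings.py (sketch)
MIDDLEWARE = [
    # ... your other middleware ...
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

# Emit "X-Frame-Options: SAMEORIGIN" so browsers only allow same-origin framing
X_FRAME_OPTIONS = 'SAMEORIGIN'
```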

Continue reading: part 1, part 3

Does your website have security vulnerabilities?

Over time it’s easy for security vulnerabilities and tech debt to slip into your codebase. I can check for clickjacking and many other vulnerabilities for free at https://django.doctor. I’m a Django code improvement bot:

If you would prefer security holes not make it into your codebase, I also review pull requests:


{% static %} handles more than you think

You probably agree that using {% static %} is not controversial:

Credit: Django Doctor

If you’re unfamiliar: {% static %} returns the path the browser can use to request the file. At its simplest, it returns a path that looks up the file on your local file system. That’s fine for local dev, but in prod we will likely use third-party libraries such as whitenoise or django-storages to improve the performance of the production web server.

There are reasons other than “because S3” for why we use {% static %}

File revving

Whitenoise provides a STATICFILES_STORAGE backend that performs file revving for cache busting. As a result the file path rendered in the template is renamed: script.js may become script-23ewd.js. The storage backend generates a hash from the file’s contents, renames the file, and generates a manifest file that looks like:

{ "script.js": "script-23ewd.js" }

This is called revving. {% static 'script.js' %} renders 'script-23ewd.js' in the template because 'script.js' is looked up in the manifest.

This is for cache busting: when the file contents change the file name changes too, so a revved file can be served with headers that cache it forever in the CDN or browser. On the next deployment, if the file contents change then so does the file name being requested, and thus the browser retrieves the most up-to-date file.
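The renaming step can be sketched in a few lines of Python (illustrative only – the 8-character hash and hyphenated naming scheme here are assumptions, not exactly what whitenoise does):

```python
import hashlib

def revved_name(name, content):
    """Return a content-hashed file name, e.g. script.js -> script-23ewd123.js."""
    digest = hashlib.md5(content).hexdigest()[:8]
    if '.' not in name:
        return '{0}-{1}'.format(name, digest)
    stem, ext = name.rsplit('.', 1)
    return '{0}-{1}.{2}'.format(stem, digest, ext)

# The manifest maps original names to revved names:
manifest = {'script.js': revved_name('script.js', b'console.log("hi")')}
```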

So {% static %} abstracts away cache-busting mechanisms so we can focus on templatey things in the template instead.

Websites can move house

Given enough time a website can change where it’s served from:

  • From subdomain to path: abc.example.com to example.com/abc/ and xyz.example.com to example.com/xyz/. This happened in a project I worked on, for SEO reasons.
  • Merging multiple repositories into one. This happened to a few projects I was involved with to make development more efficient.

At this point it would be unclear which app’s files are served from /static/. Yes, a developer could find-and-replace /static/ with /abc/static/ or /xyz/static/, but if {% static %} was used throughout, they would not need to.

So {% static %} prevents JavaScript and CSS from breaking when where the website is served from changes.

Serving from S3

Django suggests not serving static files from the web server in production. One way to achieve that is to serve the files from S3. If STATICFILES_STORAGE serves from S3, then the static domain will likely differ from the web server domain, unless some CloudFront configuration ensures otherwise.

So in that case {% static 'script.js' %} ultimately renders https://s3-example-url.com/script.js instead of /static/script.js.

So {% static %} handles changes in the domain the files are served from.

Do you {% static %} everywhere you should?

You can check your codebase at django.doctor. It checks for missing {% static %} and other Django code smells:

If you want to avoid code smells getting into your codebase in the first place install the GitHub PR bot to reduce dev effort and improve your code.

Alexa audio effects: change Alexa’s voice

Ever wished Alexa sounded like she was an old-timey radio? Sports stadium announcer Alexa? Alexa behind a door? Alexa Browser Client has you covered.


Run the demo then go to

Why? Because it’s cool. But also to demonstrate an advantage of running Alexa in the browser: we have full control over the audio received from Alexa Voice Service, so adding audio manipulation takes only a few lines of code.

Old-timey radio Alexa

Sports stadium Alexa

Alexa Browser Client – talk to Alexa from your desktop, phone, or tablet browser.

I made a pluggable Django app that allows you to talk to Alexa through your browser:

  1. Add alexa_browser_client to your INSTALLED_APPS
  2. Run the Django app: ./manage.py runserver
  3. Go to http://localhost:8000/alexa-browser-client/
  4. Say something interesting to Alexa e.g., “Alexa, What time is it?”



And hear Alexa’s answer through your browser.

But why?

  1. It’s like the Amazon Echo, but web developers can build a rich web app around it – and that web app can be used on your Desktop, Tablet, Smartphone, or Smart Watch.
  2. You can change the wakeword to anything you want. e.g., “Jarvis“.
  3. You can be 100% certain that Amazon, the CIA, or “Russian Hackers” are not always listening: the Alexa code does not fire up until the “Alexa” wakeword is spoken.
  4. Voice control is an important technology, and will only become more relevant. 11 million households now have an Alexa device, 40% of adults now use voice search once per day, and Cortana now has 133 million monthly users. If you’ve not started yet, it’s worth getting into voice technology to stay relevant.

So Alexa Browser App is a tool for developers to build voice commands into their web apps. The only limit is yourself.

Next Steps

I will be using Alexa Browser Client to build a rich web app for controlling my smart home devices. I will put cheap Android tablets in each room, and the web-app will be loaded on each tablet.

Has someone knocked on the door? “Alexa, ask the house to video chat with the front door”

Want to view your Google Calendar? “Alexa, ask the house to show my calendar”.

Want to view the traffic situation in Google Maps? “Alexa, ask the house to show my commute”.

80% of being married is shouting “What did you say?” to your spouse in another room. No more: “Alexa, ask the house to video chat with the kitchen”.

Note in the above examples you would need to create a custom Alexa Skill that reacts to “the house” and routes the command to your smart home controller app.


Alexa Browser Client has great test coverage, highest Codeclimate score, and Gemnasium integration – but the library does not yet support all the Alexa Voice Service features. Feel free to make a Pull Request to add features. As a community, we can make a fully-featured customizable open-source Echo alternative in the browser.

When four equals five

Consider a form that allows users to search for holidays. It is expected behaviour that the form remembers previous searches –  so if a user previously searched for “4 guests”, the form should pre-populate with “4 guests” the next time the page loads.

This is a simple enough idea, but an odd bug was raised against this component. Ten points to Gryffindor if you spot it before knowing the symptoms: click to see the code.

You read the code, yes? And you spotted the bug? No? Either way, a summary:

1) The components are split into ‘container’ and ‘presentational’. Here is why.
2) createContainer returns a component instance that has state from a flux store passed to it as props.
3) If state.enquiryForm.numberOfGuests is falsey then default to 1.
4) We have some tests that confirm logic is as expected (1 for falsey, 5 for otherwise).

The unit tests passed, and we were happy when we manually tested the component in the browser — the select box ‘remembered’ the value. Great. Deploy!

So why was there a bug? The report was “the number of guests select box adds 1!”

After checking the component and the code we concluded ‘nope’ — the component did not add one to the value — when we selected “one guest”, on reload we got “one guest” in the form. When we selected “five guests” we got “five guests” on reload. When we selected “four guests” we got “five guests” on reload. Wait, what?

Investigating the bug

Yes, the bug was reproducible. Great. Below is a table showing the value we selected in the form against the value pre-populated on refresh:

  Selected | Pre-populated
  1        | 1
  2        | 3
  3        | 3
  4        | 5
  5        | 5
  6        | 7
Every even number adds one to itself!? Okay…
Go back to the code and see if you can see the bug. No?

Bitwise OR, Logical OR

There’s a big difference between | and ||. Due to a simple typo a bitwise OR was used in lieu of a logical OR. The outcome of the missing pipe was this unexpected behavior.

A bitwise OR performs a logical OR on each bit of the numeric value — so five is converted to 0101 and one is converted to 0001, and each pair of bits is OR’d — if either bit is 1 then the return value has 1 in that position:

5 | 1  →  0101 | 0001  →  0101  →  5

But when we use an even number, such as four OR one, we get:

4 | 1  →  0100 | 0001  →  0101  →  5

and again with six OR one:

6 | 1  →  0110 | 0001  →  0111  →  7

but to labour the point, with seven OR one:

7 | 1  →  0111 | 0001  →  0111  →  7

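The same behaviour can be reproduced in Python, whose | and or operators split the same way as JavaScript’s | and || (function names here are mine, for illustration):

```python
def remembered_guests_buggy(number_of_guests):
    # bitwise OR: forces the lowest bit to 1, so every even number gains one
    return number_of_guests | 1

def remembered_guests_fixed(number_of_guests):
    # logical OR: falls back to 1 only when the value is falsey
    return number_of_guests or 1
```

So 4 becomes 5 under the buggy version, while odd values pass through unchanged — exactly the reported behaviour.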
Lessons learned

‘Write more unit tests’ is not the answer to this problem. After all, unit tests were written and the bug still got through. We could write tests specifically targeting ‘bitwise OR’ bugs, but this sets a precedent for all similar scenarios and would take an inordinate amount of time. Moreover, unit tests are for confirming logic, not detecting bugs. This type of bug can be more effectively detected by other means.

This bug was missed by three developers during code review. Does that indicate a human problem with the review process? I think not. Sometimes humans see what they expect to see. With this in mind, as W Edwards Deming put it: 100% inspection is only 100% effective if the inspector detects 100% of defects 100% of the time… which is impossible due to human nature. After all, if humans were that precise there wouldn’t be any defects in the first place.

We can make ourselves more precise, though, by using static analysis tools like eslint. We turned on the no-bitwise rule and it shows a helpful warning message. It should not interfere with the day-to-day tasks of the project — if an e-commerce site finds itself manually wrangling bits then it’s probably doing something it shouldn’t.

Bloom filter as a service: validate username availability without asking the server.

Consider a spreadsheet-style user management interface that allows creating and updating users in a web spreadsheet. It must validate that usernames have not been used before:


An obvious solution is to ask an endpoint if the username has already been taken. However, the admin user will experience a small lag due to the round trip from client to server: there will be at least a tenth of a second lag before the user is informed the username cannot be used. The impact is compounded when the admin user is managing many users in the spreadsheet interface.

This post will explain how to avoid that round trip completely, eliminating the delay between the user entering text and being told it’s invalid, which improves user experience.

We want the browser to have knowledge of all taken usernames without knowing what those usernames are, and without needing to call the server.

We obviously won’t give the browser a list of usernames. Words like “hacker” and “brute force attack” spring to mind. Some of those usernames will have really weak passwords! Plus there are privacy concerns because some of those usernames might be email addresses that can identify an individual human.

So, how can the browser have this knowledge without having the usernames?

Bloom filters

Skip this section if you already know what a Bloom filter is.

A Bloom filter is a space-efficient data structure for testing whether an element is a member of a set. Once an element (e.g., the string “How it’s Made”) is added to the Bloom filter, it can later tell you whether it has already seen “How it’s Made” without having to store the literal value – which saves no end of space when millions of elements are added to the Bloom filter.

A simple interface might look like:

> bloomFilter.add("Get your own back")
> bloomFilter.check("Get your own back")
= true
> bloomFilter.check("not found")
= false

Internally the Bloom filter is a bit array. When we add “50/50”, some probabilistic hash functions convert the value to a series of integers. These integers represent indexes of the bit array, and the bits at these indexes are switched to 1.

Later when we check if “50/50” has been seen, the value is passed through the same hash functions returning the same series of indexes. If all bits at those indexes are 1 then it is assumed the value has been seen before. There is risk of false positives: the Bloom filter might erroneously say something has already been encountered.
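Here is a minimal Python sketch of those mechanics (the bit-array size, the number of hash functions, and the salted-md5 hashing scheme are all illustrative choices, not what any particular library uses):

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [0] * size

    def _indexes(self, value):
        # Derive `num_hashes` deterministic bit indexes by salting a hash
        for salt in range(self.num_hashes):
            digest = hashlib.md5('{0}:{1}'.format(salt, value).encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, value):
        for index in self._indexes(value):
            self.bits[index] = 1

    def check(self, value):
        return all(self.bits[index] for index in self._indexes(value))
```

This mirrors the add/check interface shown earlier; a real service would also expose `bits` (and the salts) so the browser can rebuild the filter.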

For hands on, see this interactive example.

Bloom filter for profit

So, by now you see we can add all the used usernames to a server-side Bloom filter, but it might look like we’re no closer to our goal – the Bloom filter would still be on the server, and we would still need to send the username to it to check validity. Well, straw man, that’s incorrect!

We can expose the Bloom filter’s internal bit array to the browser, so the browser can re-create the Bloom filter client side, without any further need to talk to the server! The general flow:

  1. Create a microservice that maintains a Bloom filter – adding usernames to it.
  2. Expose the component pieces of the Bloom filter on an endpoint, so the Bloom filter can be re-created elsewhere.
  3. A browser calls the endpoint, creates the client-side Bloom filter, then uses it for validating usernames.

I created a super simple proof-of-concept Bloom filter as a service using Node and ExpressJS – available on GitHub. I’m a lazy namer so I called it “bloomy”.

Client side

I implemented the client-side code in an Angular app. It looks like this:

// `getBloomy` resolves with the response from bloomy's endpoint. Note `BloomFilter`
// is a global introduced by including `public/javascripts/BloomFilter.js`,
// which is included in the github repo.
getBloomy().then(function(response) {
    myService.bloomFilter = new BloomFilter({capacity: response.capacity});
    myService.bloomFilter.filter = response.filter;
    myService.bloomFilter.salts = response.salts;
});

// later on, using the client-side bloom filter
myService.bloomFilter.check(value); // true or false

The `getBloomy` code is called in the `resolve` property of angular-ui-router FWIW.

Now we have eliminated the need to call the server when the user enters the username – the browser already “knows” if the username has not been taken. Spooky!

There is lots of room for improvement, and my use-case and tech stack will likely differ from yours, but that’s why forking and PRs were invented 🙂

The solution is not without its problems, however:

  • The server-side and client-side Bloom filter implementations must be exactly the same for exporting and importing of bit arrays to work. I tried making a Python Bloom filter interoperable with a Javascript one. Take heed: it creates only headaches. The easiest thing is to use exactly the same Bloom filter implementation on client and server side – which I have included in the repo.
  • Hackers can extract usernames from the client-side Bloom filter by brute forcing it (bloomFilter.check('a'); bloomFilter.check('aa'); bloomFilter.check('aaa')). Brute force attacks on your login endpoint can be rate limited. Not so if the Bloom filter is on the client side! This might be solvable with further investigation.
  • Data consistency/single-source-of-truth issues: a user can be created on the server after the server-side Bloom filter was exported. The client-side Bloom filters will then be unaware a username is no longer available, and erroneously say the username is valid. This can be worked around by pushing the Bloom filter bit array to browsers whenever it changes. This can get mad hairy.
  • Once an item is added to a Bloom filter it cannot be removed. This is possibly OK for usernames – there is a school of thought that argues usernames should be historically unique. However, if we want to remove items, we can use a Cuckoo filter instead.
  • Currently the repo code assumes a max Bloom filter size of 10,000. You might have more. This can be resolved by using a scalable Bloom filter.


The “Bloom filter as a service” approach replaces one tiny user experience problem with a few technical problems which, if mismanaged, can lead to huge user experience problems such as incorrect validation results.

Moreover, if the use-case is username validation then an attack vector is introduced. Let’s not do that in production. A better real-life usage would be any other client-side validation that involves testing against a whitelist or blacklist, such as “adding tags to a post” or “preventing the user inserting links to naughty websites”.

Angular Material Design custom build: include only the parts you need

I reduced the file size of my app’s dependencies by 15% by including only the Angular Material source code my app actually uses. This simple action improved page load speed, which should reduce bounce rate. As the app in question is another app’s marketing/landing page, this improvement can have real monetary value.

The landing page is built with Angular Material (because so is the app it advertises). Angular Material is heading for feature parity with the components detailed in the Material Design spec. These features are awesome, but the landing page doesn’t use them all.

For the landing page, these unused features are wasteful: users wait for bloat to download that the app never uses. If we strip away unused features then the app loads a little faster.

This post will explain how to easily make a custom Angular Material build using Gulp that includes only the needed features, without having to pull the Angular Material repo.

Angular Material Design “modules”

The Angular Material bower package comes with a “/modules/js/” folder, which has each of the components split into different source folders:


This makes it super easy to build only the components we need. We just need a bit of boilerplate to define the angular module these components will live under.


My build tasks look something like the below. When the “buildDependencies” task is run, it first finishes the “buildMaterial” task, which creates “custom.js” in the angular-material folder. “buildDependencies” then picks up custom.js and concatenates it with the other dependencies.

var gulp = require('gulp');
var insert = require('gulp-insert');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

var DIR_JS_LIBS = './assets/js/libs/';
var DIR_BUILD = './build/';

gulp.task('buildMaterial', function () {
    var modules = [
        // ...the component source files your app actually uses...
    ];
    var boilerplate =
        '(function(){' +
        '    angular.module("ngMaterial", ["ng", "ngAnimate", "ngAria", "material.core", "material.core.theming.palette", "material.core.theming", "material.components.toast"]);' +
        '})();';
    return gulp.src(modules, {cwd: DIR_JS_LIBS})
        .pipe(concat({path: 'custom.js', cwd: ''}))
        .pipe(insert.prepend(boilerplate))
        .pipe(gulp.dest(DIR_JS_LIBS + 'angular-material/'));
});

gulp.task('buildDependencies', ['buildMaterial'], function () {
    var deps = [
        // ...third-party files, plus the generated angular-material/custom.js...
    ];
    return gulp.src(deps, {cwd: DIR_JS_LIBS})
        .pipe(concat({path: 'deps.min.js', cwd: ''}))
        .pipe(uglify())
        .pipe(gulp.dest(DIR_BUILD));
});


The above example is a stripped-down version of the real task. In reality the project uses about 7 components. After minifying, the custom build saved about 60kb, which is approximately a 15% reduction.

This is not a huge saving, but it is worth the effort: if a philosophy of continual improvement is followed, many small improvements add up to great savings.

FWIW, click here to see my project using the custom Angular Material build. It’s a marketing app for “Monofox Ask Peers” – basically a slimmed-down private Stack Overflow for schools.

Memoize AJAX requests without data inconsistency

A common problem when working with ajax is firing duplicate requests. A common solution is to first check if a request is ongoing. I solved this problem differently: by memoizing the GET request for the lifetime of the request.

Memoization is storing the result of a function call and returning the cached result when the same inputs occur again. It is often used to optimize expensive function calls, but there is value in using it here too. The code below uses Restangular, but the concept works just as well for any promise-based request library.

function memoizedService(route) {
    var service = Restangular.service(route);
    var hasher = JSON.stringify;
    var memoizedGetList = _.memoize(service.getList, hasher);

    /**
     * Get list, but if there is already an ongoing request with the same
     * params then return the existing request.
     * @param {Object} params
     * @return {promise}
     */
    service.getList = function(params) {
        return memoizedGetList(params).finally(function() {
            delete memoizedGetList.cache[hasher(params)];
        });
    };

    return service;
}

var questionEndpoint = memoizedService('question');

Make sure you install underscore with a bit of

bower install underscore --save

Then, to use the feature we would do:

questionEndpoint.getList({page: 1});

If we request page 1 while there is already a request for page 1, the second `getList` returns the same promise the first request returned, and you will only see one request for page 1 in the network panel. Importantly, we also remove the cached promise when the request completes. This prevents data consistency problems: a GET request can return different data over time (e.g., more records), and we want to make sure the user receives the most up-to-date data.
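For a Python analogue of the same pattern – memoize the in-flight request by its params and evict the cache entry as soon as it settles – an asyncio sketch might look like this (`fetch` is a stand-in for the real request function):

```python
import asyncio
import json

def memoize_inflight(fetch):
    """Dedupe concurrent calls with identical params; forget them on completion."""
    cache = {}

    async def wrapper(params):
        key = json.dumps(params, sort_keys=True)
        if key not in cache:
            async def run():
                try:
                    return await fetch(params)
                finally:
                    del cache[key]  # evict so later calls get fresh data
            cache[key] = asyncio.ensure_future(run())
        return await cache[key]

    return wrapper
```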

Tell browser when files are updated (per-file cache busting)

This post will explain a simple method to tell the browser to re-download a file when the file has changed.

The problem

A common approach to cache busting when files are updated is to automatically add a “build label” to the url – maybe using requireJs’s `urlArgs`:

    urlArgs: "build={{build_id}}",  //adds ?build=... to each request

When my_file_1.js is requested, the build will be added to the url:


This is fine if all the code is in a single javascript file but a problem arises when the code is split into smaller files. We have 30 files used in our web site:


Later we update “my_file_1.js”, and therefore the build label is also updated, so this happens:


Now one file was updated but the browser re-downloads all 30 files because the build label changed: 97% of the files have been needlessly re-downloaded. This slows page loads compared to re-downloading only changed files. The problem grows if continuous integration is used, with many pushes to production each day. Moreover, we also force re-download of any third-party libs in use. Performance will surely suffer.

A solution

Instead of using a build label for cache busting, we will extend requireJs to use the commit hash of the last time each individual file was changed (note: this bit is quite hacky; if anyone has a smarter way to do it then I will update):

requirejs.s.contexts._.realNameToUrl = requirejs.s.contexts._.nameToUrl;
requirejs.s.contexts._.nameToUrl = function() {
    var url = requirejs.s.contexts._.realNameToUrl.apply(this, arguments);
    if (hashes[url]) {
        return url + '?hash=' + hashes[url];
    }
    return url;
};

Before we can use this code we need to create a file containing file paths and commit hashes, a file that looks like this:

var hashes = {
    "/static/my_file_1.js": "28c72d56",
    "/static/my_file_2.js": "8e4e7740",
    "/static/my_file_30.js": "28c72d56"
};

I made a django app that does this (here). If you’re not using Django then fork the repo and make it work in your environment.
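If you’re rolling this yourself, the interesting part of such a script is small: ask git for the short hash of the last commit touching each file, then render the map as the JS file the requireJs hack expects. A sketch (function names are mine; the real app does more):

```python
import json
import subprocess

def last_commit_hash(path):
    """Short hash of the last commit that touched `path` (requires a git checkout)."""
    out = subprocess.check_output(['git', 'log', '-1', '--format=%h', '--', path])
    return out.decode().strip()

def render_hashes_js(url_to_hash):
    """Render the {url: hash} map in the `var hashes = {...}` form shown above."""
    return 'var hashes = ' + json.dumps(url_to_hash, indent=4) + ';\n'
```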

The outcome

See how the hashes are added to the static file requests.

Usage in production

We can manually call the script to update the file containing commit hashes and add the resulting file to version control. That’s fine. However, I’m quite forgetful, and forgetting to run the script will result in the browser using old files. As I use ansible for provisioning my servers, I just add the following to my ansible script:

- name: getting cache bust hashes
 django_manage: >

And it just works.


  • If you update your static files using git commit --amend then the commit hash will not change.
  • The hack to requirejs will not work if you have multiple contexts in play.

JSON schema validation with Django Rest Framework

Django Rest Framework integrates well with models to provide out-of-the-box validation, and ModelSerializers allow further fine-grained custom validation. However, if you’re not using a model as the resource of your endpoint, the code required for custom validation of a complex data structure can get hairy.

If there is a heavily nested data structure then a serializer can be used that has a nested serializer, which also has a nested serializer, and so on – or a JSON schema and custom JSON parser can be employed.

Using a JSON schema generation tool also provides a quick win: generate the canonical “pattern” of valid JSON structure using data known to be of the correct structure. This post will go through doing this JSON schema validation using Django Rest Framework.

Note my folder structure is like so:



If you find yourself in the following situations then this approach should come in useful:

  • Storing data from an external service when you don’t have control over the schema and don’t want to replicate it in a database.
  • Data not related to a specific resource.
  • An endpoint for saving data serialized from taffy.
  • Needing to store data in a flat file instead of a database due to a technical constraint.

JSON Schema

We will validate the JSON posted to our endpoint against a JSON schema we define. The JSON schema standard is not yet finalized, but is mature enough for our use case. This example uses the jsonschema python package, and the following data:

Note for the sake of brevity the example data structure below is simple, but just pretend it’s complex. If the data structure really were this simple, a serializer should be used.

# schemas.py
json = {
    "name": "Product",
    "properties": {
        "name": {
            "type": "string",
            "required": True
        },
        "price": {
            "type": "number",
            "minimum": 0,
            "required": True
        },
        "tags": {
            "type": "array",
            "items": {"type": "string"}
        },
        "stock": {
            "type": "object",
            "properties": {
                "warehouse": {"type": "number"},
                "retail": {"type": "number"}
            }
        }
    }
}

# The JSON Schema above can be used to test the validity of the JSON below:
example_data = {
    "name": "Foo",
    "price": 123,
    "tags": ["Bar", "Eek"],
    "stock": {
        "warehouse": 300,
        "retail": 20
    }
}

For an easy look at validation in practice, take a look here.
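To make the validation concrete, here is a hand-rolled sketch of the draft-3-style checks implied by the schema above – required, type, and minimum (illustrative only; in the real endpoint jsonschema.validate does this work, and handles far more):

```python
def validate_against_schema(data, schema):
    """Check `required`, `type` and `minimum` for top-level properties."""
    type_map = {
        'string': str,
        'number': (int, float),
        'array': list,
        'object': dict,
    }
    errors = []
    for name, rules in schema['properties'].items():
        if name not in data:
            if rules.get('required'):
                errors.append('{0} property is required'.format(name))
            continue
        value = data[name]
        if not isinstance(value, type_map[rules['type']]):
            errors.append('{0} is not of type {1}'.format(name, rules['type']))
        elif 'minimum' in rules and value < rules['minimum']:
            errors.append('{0} is less than the minimum {1}'.format(name, rules['minimum']))
    return errors
```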


Now to implement the DRF endpoint that uses JSON schema validation:

# views.py
from rest_framework import status, views
from rest_framework.exceptions import ParseError
from rest_framework.response import Response

from . import negotiators, parsers


class ProductView(views.APIView):

    parser_classes = (parsers.JSONSchemaParser,)
    content_negotiation_class = negotiators.IgnoreClientContentNegotiation

    def post(self, request, *args, **kwargs):
        try:
            request.data  # implicitly calls parser_classes
        except ParseError as error:
            return Response(
                'Invalid JSON - {0}'.format(error.detail),
                status=status.HTTP_400_BAD_REQUEST)
        return Response()
# parsers.py
import jsonschema
from rest_framework.exceptions import ParseError
from rest_framework.parsers import JSONParser

from . import schemas


class JSONSchemaParser(JSONParser):

    def parse(self, stream, media_type=None, parser_context=None):
        data = super(JSONSchemaParser, self).parse(
            stream, media_type, parser_context)
        try:
            jsonschema.validate(data, schemas.json)
        except jsonschema.ValidationError as error:
            raise ParseError(detail=error.message)
        else:
            return data
# urls.py
from django.conf.urls import patterns, url

from . import views

urlpatterns = patterns(
    '',
    url(r'^api/product/$', views.ProductView.as_view(), name='product_view'),
)

Content negotiation

The `parse` method on each parser in `parser_classes` is called only if the request’s “Content-Type” header matches the `media_type` attribute on the parser, which means the JSON schema validation will not run if no “Content-Type” header is set. If the schema validation must always run, I see a few options:

    • Assign `parser_classes = (PlainSchemaParser, JSONSchemaParser, XMLSchemaParser, YAMLSchemaParser, etc.)` on ProductView and define the YAML, XML, etc. schemas.
    • Force the view to use the JSONSchemaParser regardless of whether the client sends JSON, XML, etc.

To keep it simple, this example chooses the second option by using custom content negotiation (pulled directly from the DRF docs):

# negotiators.py

from rest_framework.negotiation import BaseContentNegotiation

class IgnoreClientContentNegotiation(BaseContentNegotiation):
    def select_parser(self, request, parsers):
        """Select the first parser in the `.parser_classes` list."""
        return parsers[0]

    def select_renderer(self, request, renderers, format_suffix):
        """Select the first renderer in the `.renderer_classes` list."""
        return (renderers[0], renderers[0].media_type)

Posting to the endpoint

Let’s define a simple web service:

function createProduct(data){
    $.post('/api/product/', data)
        .fail(function(xhr) {
            var error = JSON.parse(xhr.responseText);
            console.log('error - ' + error.detail);
        });
}

createProduct({
    "name": "Foo",
    "price": 123,
    "tags": ["Bar", "Eek"],
    "stock": {
        "warehouse": 300,
        "retail": 20
    }
});
// OK

createProduct({
    "price": 123,
    "tags": ["Bar", "Eek"],
    "stock": {
        "warehouse": 300,
        "retail": 20
    }
});
// error - name property is required

createProduct({
    "name": "Foo",
    "price": -1,
    "tags": ["Bar", "Eek"],
    "stock": {
        "warehouse": 300,
        "retail": 20
    }
});
// error - price is less than the required minimum value


Word of caution: I find serializers much more maintainable than JSON schemas, so if you are 50/50 on whether to use JSON schema validation or a serializer, I suggest going for the serializer.