Builtins

Analyzing your code

DiffPrivacyInference.typecheck_from_fileMethod
typecheck_from_file(file::AbstractString)

Typecheck the file named file, calling the Haskell backend. Includes are resolved and parsed as well. The typechecking result, i.e. the inferred type of the last statement in the file, will be printed to the REPL.
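
Examples

A usage sketch, assuming the code to check lives in a file named sensitivity.jl in the current directory:

typecheck_from_file("sensitivity.jl")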

source
DiffPrivacyInference.typecheck_from_stringMethod
typecheck_from_string(code::AbstractString)

Typecheck the Julia code passed as a String, calling the Haskell backend. Includes are resolved and parsed as well. The typechecking result, i.e. the inferred type of the last statement in the given code, will be printed to the REPL.
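
Examples

A usage sketch, typechecking a one-line function passed as a string literal:

typecheck_from_string("function foo(x::Integer) x end")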

source

Types

DiffPrivacyInference.DMGradsType

A wrapper for Zygote.Grads, so we can ensure that only typecheckable operations are performed on the gradient.

Examples

A black-box function computing the gradient of some DMModel, given a loss function loss:

function unbounded_gradient(model::DMModel, d::Vector, l) :: BlackBox()
   gs = Flux.gradient(Flux.params(model.model)) do
           loss(d,l,model)
        end
   return DMGrads(gs)
end
source
DiffPrivacyInference.DMModelType

A wrapper for Flux models, so we can ensure that only typecheckable operations are performed on the model. Whatever you put inside this wrapper needs to at least support calling Flux.params on it.

Examples

Initialize a Flux neural network:

 DMModel(Flux.Chain(
         Flux.Dense(28*28,40, Flux.relu),
         Flux.Dense(40, 10),
         Flux.softmax))

Note that construction of models cannot be typechecked and needs to happen inside black-box functions that return the model. So a typecheckable function could look like this:

function init_model() :: BlackBox()
   DMModel(Flux.Chain(
           Flux.Dense(28*28,40, Flux.relu),
           Flux.Dense(40, 10),
           Flux.softmax))
end
source
DiffPrivacyInference.DataType

Annotation for real numbers with the discrete metric, i.e. d(a,b) = (a==b) ? 0 : 1. Use it to tell the typechecker you want to infer sensitivity/privacy of a function variable w.r.t. the discrete metric. An alias for Julia's Real type, so you cannot dispatch on it. See the documentation on measuring distance for more info.
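Examples

A minimal sketch of a function whose argument x is a real number measured with the discrete metric:

function foo(x::Data)
   x + x
end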

source
DiffPrivacyInference.PrivacyFunctionType

Annotation for function arguments that are privacy functions themselves. You have to annotate such arguments, otherwise typechecking will assume a non-private function and fail if you pass in a privacy function.

Examples

A function that applies the argument privacy function to the other argument.

function appl_priv(f::PrivacyFunction, x) :: Priv()
   f(x)
end
source
DiffPrivacyInference.BlackBoxMethod
BlackBox(T::DataType) :: DataType

Annotation for functions with return type T that cannot be typechecked. Their arguments will be assigned infinite sensitivity. Note that you are not allowed to mutate any of the arguments in a function like this; if you do, the typechecking result will be invalid! Unlike BlackBox(), this method specifies the return type.

Examples

A function returning Real and calling an imported qualified name, which is not permissible in non-black-boxes:

loss(X, y, m::DMModel) :: BlackBox(Real) = Flux.crossentropy(m.model(X), y)
source
DiffPrivacyInference.BlackBoxMethod
BlackBox() :: DataType

Annotation for functions that cannot be typechecked. Their arguments will be assigned infinite sensitivity. Note that you are not allowed to mutate any of the arguments in a function like this; if you do, the typechecking result will be invalid! This method allows any return type.

Examples

A function calling an imported qualified name, which is not permissible in non-black-boxes:

loss(X, y, m::DMModel) :: BlackBox() = Flux.crossentropy(m.model(X), y)
source
DiffPrivacyInference.MetricGradientMethod
MetricGradient(T, N<:Norm)

Annotate gradients with the desired metric you want them to be measured in by the typechecker. Just maps to DMGrads, so you cannot dispatch on it. See the documentation on measuring distance for more info.

Examples

A function with a gradient argument with specified metric and unspecified output metric:

function sum2(m::MetricGradient(Real, L2)) :: DMGrads
   sum_gradients(m, m)
end
source
DiffPrivacyInference.MetricMatrixMethod
MetricMatrix(T, N<:Norm)

Annotate matrices with the desired metric you want them to be measured in by the typechecker. Just maps to Matrix{T}. See the documentation on measuring distance for more info.

Examples

A function with a matrix argument with specified metric and unspecified output metric:

function sum2(m::MetricMatrix(Real, L2)) :: Matrix{Real}
   m + m
end
source
DiffPrivacyInference.MetricVectorMethod
MetricVector(T, N<:Norm)

Annotate vectors with the desired metric you want them to be measured in by the typechecker. Just maps to Vector{T}, so you cannot dispatch on it. See the documentation on measuring distance for more info.

Examples

A function with a vector argument with specified metric and unspecified output metric:

function sum2(m::MetricVector(Real, L2)) :: Vector{Real}
   m + m
end
source
DiffPrivacyInference.PrivMethod
Priv(T::DataType) :: DataType

Annotation for functions whose differential privacy we want to infer and that return a subtype of T.

Examples

A privacy function with return type Real, argument x of type Real whose privacy will be inferred, and argument y of type Integer whose privacy we're not interested in:

function foo(x::Real, y::Static(Integer)) :: Priv(Real)
   x
end
source
DiffPrivacyInference.PrivMethod
Priv() :: DataType

Annotation for functions whose differential privacy we want to infer. This method denotes that the function can return any type.

Examples

A privacy function with argument x whose privacy will be inferred and argument y of type Integer whose privacy we're not interested in:

function foo(x, y::Static(Integer)) :: Priv()
   x
end
source
DiffPrivacyInference.StaticMethod
Static(T::DataType) :: DataType

Annotation for function arguments whose privacy is of no interest to us. Argument T denotes the type annotation we give for this argument. Their privacy will most likely be set to infinity to allow tighter bounds on other arguments.

Examples

A privacy function with argument x whose privacy will be inferred and argument y of type Integer whose privacy we're not interested in:

function foo(x, y::Static(Integer)) :: Priv()
   x
end
source
DiffPrivacyInference.StaticMethod
Static() :: DataType

Annotation for function arguments whose privacy is of no interest to us and for which we do not give type annotations. Their privacy will most likely be set to infinity to allow tighter bounds on other arguments.

Examples

A privacy function with argument x whose privacy will be inferred and argument y whose privacy we're not interested in:

function foo(x, y::Static()) :: Priv()
   x
end
source

Sensitivity functions

DiffPrivacyInference.clip!Method
clip!(l::Norm, g::DMGrads) :: Nothing

Clip the gradient, i.e. scale by 1/norm(g,l) if norm(g,l) > 1. Mutates the gradient, returns nothing.
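
Examples

A sketch of a mutating helper that clips a gradient in L2 norm (the helper name and its Nothing return annotation are illustrative):

function clip_l2!(g::DMGrads) :: Nothing
   clip!(L2, g)
   return
end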

source
DiffPrivacyInference.clipMethod
clip(l::Norm, g::AbstractVector)

Return a clipped copy of the input vector, i.e. scale by 1/norm(g,l) if norm(g,l) > 1.
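
Examples

A minimal sketch: the L1 norm of the input is 3.5, so the result is the input scaled by 1/3.5.

clip(L1, [0.5, 2.0, -1.0])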

source
DiffPrivacyInference.clipMethod
clip(l::Norm, g::DMGrads) :: DMGrads

Return a clipped copy of the gradient, i.e. scale by 1/norm(g,l) if norm(g,l) > 1.

source
DiffPrivacyInference.clipnMethod
clipn(v::T, upper::T, lower::T) where T <: Number

Clip the number v, i.e. return v if it is in [lower,upper], return upper if v is larger than upper, and return lower if v is smaller than lower.
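
Examples

Clipping numbers to the interval [-1.0, 1.0]:

clipn(3.5, 1.0, -1.0)   # returns 1.0
clipn(0.3, 1.0, -1.0)   # returns 0.3
clipn(-2.0, 1.0, -1.0)  # returns -1.0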

source
DiffPrivacyInference.cloneMethod
clone(g::DMGrads)

Create and return a copy of a DMGrads object, where only the gradient part of the Zygote gradient is copied while the part pointing to the parameters of a model is kept. Thus we get an object that we can mutate safely while retaining information on which entry of the gradient belongs to which parameter of which model. If you want to return a DMGrads object from a function, you have to return a copy.

Examples

A function returning a copy of the gradient object:

function compute_and_scale_gradient(model::DMModel, d, l) :: BlackBox()
   gs = unbounded_gradient(model, d, l)
   scale_gradient!(100, gs)
   return clone(gs)
end
source
DiffPrivacyInference.norm_convertMethod
norm_convert(n::Norm, m)

Return a copy of m, but tell the typechecker to measure it with a different norm n. See the documentation on measuring distance for more info.

Examples

Use norm_convert to measure the matrix in L1 norm instead of L2 norm.

function foo(m::MetricMatrix(Real, L2))
    norm_convert(L1, m)
end

The assigned type is:

{
|   - Matrix<n: L2, c: C>[s₂ × n]{τ₂₁}
|       @ √(n)
|   --------------------------
|    -> Matrix<n: L1, c: C>[s₂ × n]{τ₂₁}
}

You can see that we paid a sensitivity penalty of √n where n is the number of rows of m.

source
DiffPrivacyInference.reduce_colsMethod
reduce_cols(f::Function, m::AbstractMatrix)

Apply the privacy function f :: (r x 1)-Matrix -> T to each column of the (r x c)-Matrix m, return a vector of the results. If f is (eps,del)-private in its argument, the reduction is (r*eps, r*del)-private in m.
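
Examples

A sketch (the column query and the sensitivity parameter 1 are illustrative): noise every entry of each column with the laplacian mechanism and collect the noised columns.

function noise_cols(m::Matrix{<:Real}, eps) :: Priv()
   col_query(c) :: Priv() = laplacian_mechanism(1, eps, c)
   reduce_cols(col_query, m)
end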

source
DiffPrivacyInference.scale_gradient!Method
scale_gradient!(s::Number, gs::DMGrads) :: Nothing

Scale the gradient represented by the Zygote.Grads struct wrapped in the input DMGrads gs by the scalar s. Mutates the gradient, returns nothing.
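
Examples

A sketch scaling a gradient by a learning rate of 0.1 (mutating gs; the learning rate is illustrative):

function apply_learning_rate!(gs::DMGrads) :: Nothing
   scale_gradient!(0.1, gs)
   return
end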

source
DiffPrivacyInference.subtract_gradient!Method
subtract_gradient!(m::DMModel, gs::DMGrads) :: Nothing

Subtract the gradient represented by the Zygote.Grads struct wrapped in the input DMGrads gs from the parameters of the model m. Mutates the model, returns nothing.
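
Examples

A sketch of a gradient-descent step: scale the gradient by a learning rate and subtract it from the model parameters (both gs and m are mutated; the learning rate 0.1 is illustrative):

function descent_step!(m::DMModel, gs::DMGrads) :: Nothing
   scale_gradient!(0.1, gs)
   subtract_gradient!(m, gs)
   return
end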

source
DiffPrivacyInference.unboxMethod
unbox(x, T, s)

Annotate a value that results from a call to a black box function with the return container type T and size s. Every call to black box functions needs to be wrapped in an unbox statement. If the returned type does not match the annotation, a runtime error will be raised.

Examples

product(x, y) :: BlackBox() = x * y'
function compute_product(x,y)
   dx = length(x)
   dy = length(y)
   l = unbox(product(x,y), Matrix{<:Real}, (dx,dy))
   l
end
source
DiffPrivacyInference.unboxMethod
unbox(x, T)

Annotate a value that results from a call to a black box function with the return container type T. Every call to black box functions needs to be wrapped in an unbox statement. If the returned type does not match the annotation, a runtime error will be raised.

Examples

loss(X, y) :: BlackBox(Real) = Flux.crossentropy(X, y)
function compute_loss(X,y)
   l = unbox(loss(X,y), Real)
   l
end
source
DiffPrivacyInference.undisc_container!Method

undisc_container!(m::T) :: T

Make a clipped vector/gradient measured using the discrete metric into a vector/gradient measured with the clipping norm instead. Does not change the value of the argument. It can be used so that a gradient obtained from a black-box computation (hence measured in the discrete norm) can be fed into e.g. the gaussian mechanism (which expects its input to be measured in the L2 norm). See the documentation on measuring distance for more info.

Example

Clip and noise a gradient, mutating the input.

function noise_grad!(g::MetricGradient(Data, LInf), eps, del) :: Priv()
    clip!(L2,g)
    undisc_container!(g)
    gaussian_mechanism!(2, eps, del, g)
    return
end
source
DiffPrivacyInference.undisc_containerMethod

undisc_container(m::T) :: T

Make a clipped vector/gradient measured using the discrete norm into a vector/gradient measured with the clipping norm instead. Does not change the value of the argument. It can be used so that a gradient obtained from a black-box computation (hence measured in the discrete norm) can be fed into e.g. the gaussian mechanism (which expects its input to be measured in the L2 norm). See the documentation on measuring distance for more info.

Example

Clip and noise a gradient, not mutating the input.

function noise_grad(g::MetricGradient(Data, LInf), eps, del) :: Priv()
      cg = clip(L2,g)
      ug = undisc_container(cg)
      gaussian_mechanism(2, eps, del, ug)
end
source

Privacy functions

DiffPrivacyInference.above_thresholdMethod
above_threshold(queries :: Vector{Function}, epsilon :: Real, d, T :: Number) :: Integer

The above-threshold mechanism. Input is a vector of 1-sensitive queries on dataset d mapping to the reals. Returns the index of the first query whose result at d plus (4/epsilon)-Laplacian noise is above the given threshold T plus (2/epsilon)-Laplacian noise. This is (epsilon,0)-private in d!
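
Examples

A sketch (the queries, the threshold 10 and the function name are illustrative; the queries are assumed to be 1-sensitive in d):

function first_above_ten(d::Vector, eps) :: Priv()
   queries = [x -> x[1], x -> x[2], x -> x[3]]
   above_threshold(queries, eps, d, 10)
end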

source
DiffPrivacyInference.gaussian_mechanism!Method
gaussian_mechanism!(s::Real, ϵ::Real, δ::Real, g::DMGrads) :: Nothing

Apply the gaussian mechanism to the input gradient, adding gaussian noise with SD of (2 * log(1.25/δ) * s^2) / ϵ^2 to each gradient entry separately. This introduces (ϵ, δ)-differential privacy to all variables the gradient depends on with sensitivity at most s. Mutates the gradient, returns nothing.

The implementation follows the 2021 paper Secure Random Sampling in Differential Privacy by Naoise Holohan and Stefano Braghin. It mitigates some floating-point related vulnerabilities, but not all the known ones.

Example

Clip and noise a gradient, mutating the input.

function noise_grad!(g::MetricGradient(Data, LInf), eps, del) :: Priv()
    clip!(L2,g)
    undisc_container!(g)
    gaussian_mechanism!(2, eps, del, g)
    return
end

See the flux-dp example for a full-blown implementation of private gradient descent using this mechanism.

source
DiffPrivacyInference.gaussian_mechanismMethod
gaussian_mechanism(s::Real, ϵ::Real, δ::Real, g)

Apply the gaussian mechanism to the input, adding gaussian noise with SD of (2 * log(1.25/δ) * s^2) / ϵ^2. This introduces (ϵ, δ)-differential privacy to all variables the input depends on with sensitivity at most s. Makes a copy of the input and returns the noised copy.

The implementation follows the 2021 paper Secure Random Sampling in Differential Privacy by Naoise Holohan and Stefano Braghin. It mitigates some floating-point related vulnerabilities, but not all the known ones.

Example

Clip and noise a gradient, not mutating the input.

function noise_grad(g::MetricGradient(Data, LInf), eps, del) :: Priv()
      cg = clip(L2,g)
      ug = undisc_container(cg)
      gaussian_mechanism(2, eps, del, ug)
end

See the flux-dp example for a full-blown implementation of private gradient descent using this mechanism.

source
DiffPrivacyInference.laplacian_mechanism!Method
laplacian_mechanism!(s::Real, ϵ::Real, g::DMGrads) :: Nothing

Apply the laplacian mechanism to the input, adding laplacian noise with scaling parameter of (s / ϵ) and location zero to each gradient entry separately. This introduces (ϵ, 0)-differential privacy to all variables the input depends on with sensitivity at most s. Mutates the input, returns nothing.

The implementation follows the 2021 paper Secure Random Sampling in Differential Privacy by Naoise Holohan and Stefano Braghin. It mitigates some floating-point related vulnerabilities, but not all the known ones.

Example

Clip and noise a gradient, mutating the input.

function noise_grad!(g::MetricGradient(Data, LInf), eps) :: Priv()
    clip!(L2,g)
    undisc_container!(g)
    laplacian_mechanism!(2, eps, g)
    return
end
source
DiffPrivacyInference.laplacian_mechanismMethod
laplacian_mechanism(s::Real, ϵ::Real, g)

Apply the laplacian mechanism to the input, adding laplacian noise with scaling parameter of (s / ϵ) and location zero to each entry separately. This introduces (ϵ, 0)-differential privacy to all variables the input depends on with sensitivity at most s. Makes a copy of the input, then noises and returns the copy.

The implementation follows the 2021 paper Secure Random Sampling in Differential Privacy by Naoise Holohan and Stefano Braghin. It mitigates some floating-point related vulnerabilities, but not all the known ones.

Example

Clip and noise a matrix, not mutating the input.

function noise_grad(g::MetricMatrix(Data, LInf), eps) :: Priv()
    cg = clip(L2,g)
    ug = undisc_container(cg)
    laplacian_mechanism(2, eps, ug)
end
source
DiffPrivacyInference.parallel_private_fold_rowsMethod
parallel_private_fold_rows(f::Function, i, m::AbstractMatrix, n::AbstractMatrix)

Fold the privacy function f :: Vector -> Vector -> I -> I over the two input matrices' rows simultaneously. This is parallel composition on the rows of m and n, so if f is (eps,del)-private in its first two arguments, the fold is (eps,del)-private in the input matrices. The input matrices are expected to be measured in the discrete norm.

source
DiffPrivacyInference.sampleMethod
sample(n::Integer, m::AbstractMatrix, v::AbstractMatrix) :: Tuple{Matrix, Matrix}

Take a uniform sample (with replacement) of n rows of the matrix m and corresponding rows of matrix v. Returns a tuple of n-row submatrices of m and v.
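
Examples

A sketch drawing a minibatch of 100 rows from a data matrix X and the matching rows of a label matrix y (the batch size and the Priv() annotation are illustrative):

function minibatch(X::Matrix, y::Matrix) :: Priv()
   sample(100, X, y)
end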

source