How to do it...

Perform the following steps:

  1. Create a new function using the default template named Azure Blob Storage Trigger.
  2. Next, you need to provide the name of the Azure Function, along with the Path and Storage account connection. At the end of this section, we will upload a picture to the images container (referenced in the Path parameter in the following screenshot):

Note that while creating the function, the template also creates an Azure Table Storage output binding and allows us to provide a value for the Table name parameter. However, the name of the binding parameter cannot be assigned while creating the function; it can only be changed after the function has been created. Once you have reviewed all the details, click on the Create button to create the Azure Function.
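For reference, these trigger settings end up in the function's function.json file. The following is a minimal sketch of what the generated blob trigger binding looks like, assuming the container is named images and the default AzureWebJobsStorage connection is used (both values depend on what you entered in the portal):

{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "images/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}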

  3. Once the function has been created, navigate to the Integrate tab, click on New Output, and choose Azure Table Storage. Then click on the Select button. Provide the parameter values and click on the Save button, as shown in the following screenshot:

  4. Let's create another Azure Table Storage output binding to store the information for the female faces by clicking on the New Output button in the Integrate tab, selecting Azure Table Storage, and clicking on the Select button. This is how it looks after providing the input values:

  5. Once you have reviewed all the details, click on the Save button to create the Azure Table Storage output binding that stores the details of the female faces.
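After saving both bindings, the bindings array in function.json should contain two entries along the following lines. This is a sketch: the table names MaleTable and FemaleTable are placeholders for whatever you provided in the Table name fields, and the connection name is an assumption:

{
  "name": "outMaleTable",
  "type": "table",
  "direction": "out",
  "tableName": "MaleTable",
  "connection": "AzureWebJobsStorage"
},
{
  "name": "outFemaleTable",
  "type": "table",
  "direction": "out",
  "tableName": "FemaleTable",
  "connection": "AzureWebJobsStorage"
}

The name values here must match the parameter names (outMaleTable and outFemaleTable) that we add to the Run method in the next step.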
  6. Navigate to the code editor of the Run method of the LocateMaleFemaleFaces function and add the outMaleTable and outFemaleTable parameters. The following code grabs the stream of the image that was uploaded to the blob container and passes it as input to the Cognitive Services Computer Vision API, which returns JSON containing all the face information. Once the face information, including coordinates and gender, is received, we store the face coordinates in the respective Table storage using the table output bindings. Note that CallVisionAPI reads the Computer Vision API key from an application setting named Vision_API_Subscription_Key, so make sure it is configured in the function app before testing:
#r "Newtonsoft.Json"
#r "Microsoft.WindowsAzure.Storage"

using Newtonsoft.Json;
using Microsoft.WindowsAzure.Storage.Table;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
public static async Task Run(Stream myBlob,
string name,
IAsyncCollector<FaceRectangle> outMaleTable,
IAsyncCollector<FaceRectangle> outFemaleTable,
ILogger log)
{
log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
string result = await CallVisionAPI(myBlob);
log.LogInformation(result);
if (String.IsNullOrEmpty(result))
{
return;
}
ImageData imageData = JsonConvert.DeserializeObject<ImageData>(result);
foreach (Face face in imageData.Faces)
{
var faceRectangle = face.FaceRectangle;
faceRectangle.RowKey = Guid.NewGuid().ToString();
faceRectangle.PartitionKey = "Functions";
faceRectangle.ImageFile = name + ".jpg";
if(face.Gender=="Female")
{
await outFemaleTable.AddAsync(faceRectangle);
}
else
{
await outMaleTable.AddAsync(faceRectangle);
}
}
}
static async Task<string> CallVisionAPI(Stream image)
{
using (var client = new HttpClient())
{
var content = new StreamContent(image);
var url = "https://westeurope.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Faces&language=en";
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Environment.GetEnvironmentVariable("Vision_API_Subscription_Key"));
content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
var httpResponse = await client.PostAsync(url, content);

if (httpResponse.StatusCode == HttpStatusCode.OK)
{
return await httpResponse.Content.ReadAsStringAsync();
}
}
return null;
}
public class ImageData
{
public List<Face> Faces { get; set; }
}
public class Face
{
public int Age { get; set; }
public string Gender { get; set; }
public FaceRectangle FaceRectangle { get; set; }
}
public class FaceRectangle : TableEntity
{
public string ImageFile { get; set; }
public int Left { get; set; }
public int Top { get; set; }
public int Width { get; set; }
public int Height { get; set; }
}
  7. Let's add a condition (the if (face.Gender == "Female") block in the code in step 6) to check the gender and, based on it, store the face information in the respective Table storage.
  8. Create a new blob container named images using Azure Storage Explorer, as shown in the following screenshot:

  9. Let's upload a picture containing both male and female faces to the container named images using Azure Storage Explorer, as shown here:

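If you prefer to script these two steps instead of using Azure Storage Explorer, the following minimal C# console sketch creates the container and uploads a local file. It assumes the WindowsAzure.Storage NuGet package, a storage connection string in a STORAGE_CONNECTION environment variable, and a local file named faces.jpg (all of these names are placeholders):

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class UploadSample
{
    static async Task Main()
    {
        // STORAGE_CONNECTION and faces.jpg are placeholders for your own values.
        var account = CloudStorageAccount.Parse(Environment.GetEnvironmentVariable("STORAGE_CONNECTION"));
        var container = account.CreateCloudBlobClient().GetContainerReference("images");
        await container.CreateIfNotExistsAsync();

        // Uploading a blob to the images container fires the blob trigger,
        // exactly as an upload through Azure Storage Explorer does.
        var blob = container.GetBlockBlobReference("faces.jpg");
        await blob.UploadFromFileAsync("faces.jpg");
    }
}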
  10. The function is triggered as soon as you upload an image. This is the JSON that was logged in the Logs console of the function:
{
  "requestId": "483566bc-7d4d-45c1-87e2-6f894aaa4c29",
  "metadata": { },
  "faces": [
    {
      "age": 31,
      "gender": "Female",
      "faceRectangle": {
        "left": 535,
        "top": 182,
        "width": 165,
        "height": 165
      }
    },
    {
      "age": 33,
      "gender": "Male",
      "faceRectangle": {
        "left": 373,
        "top": 182,
        "width": 161,
        "height": 161
      }
    }
  ]
}
If you are a frontend developer with expertise in HTML5 and canvas-related technologies, you can even draw rectangles that locate the faces in the image by using the coordinates provided by Cognitive Services.
  11. The function has also created two different Azure Table storage tables, as shown in the following screenshot:
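To double-check the output without the portal, a sketch along the following lines reads back the entities using the same storage SDK. The table name FemaleTable is a placeholder for whatever you configured in the output binding, and STORAGE_CONNECTION is again an assumed environment variable:

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class VerifySample
{
    static async Task Main()
    {
        var account = CloudStorageAccount.Parse(Environment.GetEnvironmentVariable("STORAGE_CONNECTION"));
        var table = account.CreateCloudTableClient().GetTableReference("FemaleTable");

        // Fetch the first segment of entities written by the outFemaleTable binding.
        var segment = await table.ExecuteQuerySegmentedAsync(new TableQuery<DynamicTableEntity>(), null);
        foreach (var entity in segment.Results)
        {
            Console.WriteLine($"{entity.RowKey}: Left={entity.Properties["Left"].Int32Value}, Top={entity.Properties["Top"].Int32Value}");
        }
    }
}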