# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/

'''
Name: mayaUndoExample
Description: Two undo examples: one showing how to use the undo stack via a
decorator, and one showing how to manage it inside a function.
'''

from maya import cmds
import traceback
import pymel.core as pm

def undo(function):
    '''
    This is a function decorator. You can use it by writing @undo one line
    above any function definition. Before the function is called an undo
    chunk is opened; when the function ends the chunk is closed.
    Be careful: if the function calls itself recursively it will break the
    undo stack.
    '''
    def funcCall(*args, **kwargs):
        result = None
        try:
            # Open the undo chunk and give it the name of the function.
            cmds.undoInfo(openChunk=True, chunkName=function.__name__)
            result = function(*args, **kwargs)
        except Exception as e:
            # On error, print the stack trace...
            print traceback.format_exc()
            # ...and make sure the Maya UI shows an error.
            pm.displayError("## Error, see script editor: %s" % e)
        finally:
            # Always close the chunk at the end, else we corrupt the stack.
            cmds.undoInfo(closeChunk=True)
        return result
    return funcCall

@undo
def simpleExampleWithDecorator():
    '''
    The decorator is applied above the function definition, so before this
    function runs an undo chunk is opened, and it is closed after the
    function finishes.
    '''
    for i in xrange(10):
        # Note: the original used cmds.createNode("spaceLocator"), but
        # spaceLocator is a command rather than a node type; the command
        # returns the new transform in a list.
        loc = cmds.spaceLocator()[0]
        cmds.xform(loc, translation=(i, i, i))

simpleExampleWithDecorator()

def undoInFunction():
    # It is recommended to always pair an undo chunk with try/finally,
    # otherwise a failure can leave the chunk open until Maya restarts.
    cmds.undoInfo(openChunk=True, chunkName="Example")
    try:
        for i in xrange(10):
            loc = cmds.spaceLocator()[0]
            cmds.xform(loc, translation=(i, i, i))
    except Exception as e:
        # On error, print the stack trace and surface it in the Maya UI.
        print traceback.format_exc()
        pm.displayError("## Error, see script editor: %s" % e)
    finally:
        # The original opened a second chunk here by mistake; the chunk
        # must be closed.
        cmds.undoInfo(closeChunk=True)

undoInFunction()
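The same pairing of openChunk/closeChunk can also be written as a context manager, which avoids the decorator when you only want a chunk around a few lines. A minimal sketch of that idea (my own variant, not part of the original script; assumes it runs inside Maya where maya.cmds is available):

from contextlib import contextmanager
from maya import cmds

@contextmanager
def undo_chunk(name="undoChunk"):
    cmds.undoInfo(openChunk=True, chunkName=name)
    try:
        yield
    finally:
        # Always close the chunk, even on error, or the undo queue corrupts.
        cmds.undoInfo(closeChunk=True)

# usage:
# with undo_chunk("makeLocators"):
#     for i in range(10):
#         loc = cmds.spaceLocator()[0]
#         cmds.xform(loc, translation=(i, i, i))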
In this post, we look at how to deploy a FastAPI application on a serverless platform. The platform we deploy to is Vercel, which provides a generous free tier to host our application. The application we build for this post is a GitHub view counter, which counts the number of page visits.

Serverless is not server-less. It is a kind of platform where the developer worries less about deployment: you spend your time developing the application, and when it comes to deployment you push, and the serverless platform takes care of the rest.

- No server management: the application still runs on a server, but the developer need not worry about server setup.
- Pay for what you use: serverless platforms charge for what you use (CPU, function calls, etc.); some even bill down to the millisecond.
- Automatic scaling: they scale up and down with demand.
- Quick deployment: push the repository/workspace to deploy. Deploys are fast, which makes it easy to ship updates.
- Low latency: the code can run closer to the user.

There are trade-offs:

- Testing is a pain: it isn't easy to replicate the serverless environment locally.
- Short-running processes: the free tier of Vercel allows only 10 seconds per function call (one request-response cycle). That is enough for some tasks, but you cannot run long background jobs.
- Cold starts: if a function is not called for some time, it is put to sleep to save memory and CPU. When a request then comes in, the code has to wake up (a cold start), which slows the response. If the function is invoked consistently, there is no such penalty.

A GitHub view counter is a simple application that counts the number of times a page is viewed. View counters became popular after GitHub introduced profile READMEs. The counter increments when a request is sent to an endpoint, and it responds with an SVG that contains the count. Images/SVGs are the only way to pull external resources into a markdown-flavored README file. Since these assets may be fetched over an insecure HTTP call, GitHub proxies all <img> requests through a proxy server running on Heroku, so all requests arrive from that one source. This makes it very difficult to count unique users, so we simply count the number of times the resource was fetched.

pip install fastapi

We don't need a server here, as serverless will take care of it. We do need an SVG to send back to the user. Navigate to https://shields.io and scroll down to the custom badge form. Type Profile Views as the label and some placeholder value such as 26 as the message, then click generate. This opens the badge in a new tab. Right-click and view the page source to get the SVG template for the badge. Create a lambda that takes a title and a count, and replaces every instance of 26 with {count} and the label with {title} (using f-strings).
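The endpoint code later in the post assumes a FastAPI app object that the post never shows being created. A minimal sketch of the scaffolding it implies (the app.py file name is from the post; the Response import is needed for the SVG responses shown later):

from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/health")
def health():
    # simple placeholder route; the counter endpoint is defined below
    return {"status": "ok"}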
SVG_STRING = lambda title, count: f"""
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="104" height="20" role="img" aria-label="{title} Views: {count}">
  <title>{title} Views: {count}</title>
  <linearGradient id="s" x2="0" y2="100%">
    <stop offset="0" stop-color="#bbb" stop-opacity=".1"/><stop offset="1" stop-opacity=".1"/>
  </linearGradient>
  <clipPath id="r"><rect width="104" height="20" rx="3" fill="#fff"/></clipPath>
  <g clip-path="url(#r)">
    <rect width="81" height="20" fill="#555"/>
    <rect x="81" width="23" height="20" fill="#97ca00"/>
    <rect width="104" height="20" fill="url(#s)"/>
  </g>
  <g fill="#fff" text-anchor="middle" font-family="Verdana,Geneva,DejaVu Sans,sans-serif" text-rendering="geometricPrecision" font-size="110">
    <text aria-hidden="true" x="415" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="710">{title} Views</text>
    <text x="415" y="140" transform="scale(.1)" fill="#fff" textLength="710">{title} Views</text>
    <text aria-hidden="true" x="915" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="130">{count}</text>
    <text x="915" y="140" transform="scale(.1)" fill="#fff" textLength="130">{count}</text>
  </g>
</svg>
"""

I used a simple dict as the database at first, but when serverless created multiple instances of the function, each instance got its own copy of the dict with different values, which led to different counts on each request. An external database works best; we'll use Redis. Head over to https://redislabs.com/ and create a free account. Log in and choose Redis Enterprise Cloud. Create a new subscription on any cloud service (AWS, GCP, Azure), choosing the free plan (30MB). Once the subscription is created, copy the endpoint and password and store them in a secure location. Install Redis for Python:

pip install redis

Create a Redis connection in app.py:

from os import getenv
from redis import Redis

db = Redis(
    host=getenv("REDIS_ENDPOINT"),
    port=int(getenv("REDIS_PORT")),  # getenv returns a string; the port must be an int
    password=getenv("REDIS_PASSWORD"),
)

Now we define the counter endpoint:

@app.get("/api/")
def user_count(username: str = "amalshaji", title: str = "Profile"):
    title = title.capitalize()
    count = db.get(username)
    if count is not None:
        # incr returns the new value, so it can be rendered directly
        count = db.incr(username, amount=1)
        return Response(content=SVG_STRING(title, count), media_type="image/svg+xml")
    db.set(username, 1)
    return Response(content=SVG_STRING(title, 1), media_type="image/svg+xml")

The route /api takes two parameters, username and title. The username defaults to amalshaji, and the title defaults to Profile. You can use a random id as the username and Project as the title to track a particular project's view count.

Create an account on Vercel. After creating the account, install vercel-cli and log in:

npm i -g vercel
vercel login

Now create a vercel.json in the project directory to set up the Vercel configuration:

{
  "version": 2,
  "builds": [{ "src": "app.py", "use": "@now/python" }],
  "routes": [{ "src": "(.*)", "dest": "app.py" }]
}

Vercel lets you emulate the serverless environment locally with the vercel dev command. In my case it was always failing, even though the code deployed successfully. Before running the application, we need to set up environment variables. Navigate to the project dashboard (in my case https://vercel.com/amalshaji/pvc), click the Settings tab, and choose Environment Variables. Set the three environment variables REDIS_ENDPOINT, REDIS_PORT, and REDIS_PASSWORD. The Redis port is not provided separately; the endpoint comes in the format URL:PORT.
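Since the dashboard hands you the endpoint as a single host:port string, one hedged way to derive the port instead of setting REDIS_PORT by hand (the rpartition split is my own, not from the post; the example value is hypothetical):

from os import getenv
from redis import Redis

endpoint = getenv("REDIS_ENDPOINT", "")   # e.g. "redis-12345.example.cloud.redislabs.com:12345"
host, _, port = endpoint.rpartition(":")  # split on the last colon
db = Redis(host=host, port=int(port), password=getenv("REDIS_PASSWORD"))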
Vercel automatically encrypts all environment variables. Once this is done, deploy a preview by running vercel. When the deployment finishes, it prints a preview link; open it and test the application. Once satisfied, promote the preview deployment to production with:

vercel --prod

That's it. The profile view counter is up and running, in my case at https://pvc.vercel.app.

Code / Response:

![](https://pvc.vercel.app/api/?username=amalshaji)
![](https://pvc.vercel.app/api/?username=xsdf434&title=project)

This counter can be faked, but there is little point in spamming the numbers: a billion views only means the counter was hit a billion times. Instructions for self-hosting will be available on the GitHub repo, and you can tweak it to fit your use case.

GitHub: amalshaji/pvc. Read about Vercel's free-tier limits here. If you have set up Redis or any other database, you can deploy directly by clicking the button below.
Running the following with a for loop gives the result I want.

Code:

l = ["Mon", "tue", "Wed", "sat"]
b = []
for a in l:
    a = a.upper()
    b.append(a)
print(b)

Result:

['MON', 'TUE', 'WED', 'SAT']

However, when I drop the list b and just print(l), I get the result below. Why is there this difference? After assigning each string in the list to the variable a and upper-casing it, do I really need to build a new list?

Code:

l = ["Mon", "tue", "Wed", "sat"]
for a in l:
    a = a.upper()
print(l)

Result:

['Mon', 'tue', 'Wed', 'sat']
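The short answer (a general Python fact, not part of the original thread): a = a.upper() rebinds the loop variable to a brand-new string and never touches the list, because strings are immutable. To change l itself, assign back by index or build a new list:

l = ["Mon", "tue", "Wed", "sat"]
for i, a in enumerate(l):
    l[i] = a.upper()   # write the new string back into the list
print(l)               # ['MON', 'TUE', 'WED', 'SAT']

# or, more idiomatically, build a new list with a comprehension:
l = [a.upper() for a in l]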
Here is the code my Flask server is running:

from flask import Flask, make_response
import os

app = Flask(__name__)

@app.route("/")
def index():
    return str(os.listdir("."))

@app.route("/<file_name>")
def getFile(file_name):
    response = make_response()
    response.headers["Content-Disposition"] = "attachment; filename=%s" % file_name
    return response

if __name__ == "__main__":
    app.debug = True
    app.run("0.0.0.0", port=6969)

If the user goes to the site, it prints the files in the directory. However, if they go to site:6969/filename, it should download the file. I must be doing something wrong, because the file size is always 0 bytes and the downloaded file contains no data. Any ideas? I tried adding the Content-Length header and that didn't work. I don't know what else it could be.

All the header does is tell the browser to treat the response data as a downloadable file with a given name. You never actually set any response data, which is why it is blank. You need to put the file contents into the response for this to work:

@app.route("/<file_name>")
def getFile(file_name):
    headers = {"Content-Disposition": "attachment; filename=%s" % file_name}
    with open(file_name, 'r') as f:
        body = f.read()
    return make_response((body, headers))

EDIT: cleaned up the code a little based on the API docs.

As Danny wrote, you don't provide any content in your response, which is why you get 0 bytes. There is, however, a convenient send_file helper in Flask for returning a file's contents:

from flask import send_file

@app.route("/<file_name>")
def getFile(file_name):
    return send_file(file_name, as_attachment=True)

Note that file_name is relative to the application's root path (app.root_path) in this case.
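One further note beyond the answers above (my own addition, hedged): passing a user-supplied name straight to open() or send_file allows path traversal with names like ../../etc/passwd. Flask's send_from_directory rejects paths that escape the given directory. A minimal sketch:

from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/<file_name>")
def get_file(file_name):
    # send_from_directory returns a 404 if file_name escapes the directory
    return send_from_directory(".", file_name, as_attachment=True)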
We're relying heavily on Observable for the most recent edition of Berkeley MIDS W209: Information Visualization. While Observable is a hosted tool, there are a handful of tricks that make it useful for enterprise applications (where data needs to reside locally). One of these is relying on locally accessible data: Observable runs in your browser, so it can access local files if you serve them, or an entire local webapp or API. Here, we'll connect an Observable front end both to some simple file-serving web servers and to a Python Flask server running on my laptop to demonstrate how this is possible.

Basic Flask setup

For more details, follow along with the Flask quickstart. We'll start with the hello world app there:

# hello.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

Which we run in debug mode with:

FLASK_ENV=development FLASK_APP=hello.py flask run

Skip the FLASK_ENV option if you don't want debug mode. That's not doing much for us yet, so let's make an API:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

@app.route('/api/')
@app.route('/api/<filename>/')
def api(filename=None):
    if filename:
        # you could go load a file:
        # json.loads(Path(filename).read_text())
        d = {'data': [1, 2, 3]}
    else:
        # maybe go query a database?
        d = {'query_result': ['a', 'b', 'c']}
    # we just need to return the dict object
    # flask will treat returned dict objects as json responses
    # (flask treats returned strings, like from hello_world(), as response
    # bodies with default headers and content type)
    return d

Cool, now we have an API in Flask. In the response header, it sets Content-Type: application/json (rather than the standard HTML page type of Content-Type: text/html; charset=utf-8; FWIW I just copied both of those from the Network tab of Chrome's Inspector tool). Of course you could do anything you want inside the api() or hello_world() functions, like run a TensorFlow model on some data that was just uploaded via a POST request, and return the model's score as a JSON "API" response.

CORS

The first thing we'll have to think about is Cross-Origin Resource Sharing. Web browsers protect users by not allowing a website to use the content of a different site unless that site specifically allows it. That means I can't run githubb.com, show the github.com site's pages, and collect all the details entered through my proxy site: the github.com pages come back with headers that don't explicitly allow my site, githubb.com, so the browser refuses (this operates as an allow list). All this to say that our local site hosting files will need to specifically allow observablehq.com to use them! We could be more relaxed and allow any site to use our locally hosted files with * if we want (they are only visible on our computer anyway).

Options for CORS with Flask:
- Just return (response, {'Access-Control-Allow-Origin': 'observablehq.com'}).
- Or, if you're including the status, (response, 200, {'Access-Control-Allow-Origin': '*'}).

For our hello world, that would be (see also the hook sketch below):

def hello_world():
    return 'Hello, World!', 200, {'Access-Control-Allow-Origin': '*'}

Options for CORS just serving files with node http-server:
- Just set the --cors flag.
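A small aside on the Flask side that isn't in the original post: rather than appending the header tuple to every return value, you can add it once with an after_request hook (a sketch; note browsers expect a full origin, scheme included):

@app.after_request
def add_cors_header(response):
    response.headers['Access-Control-Allow-Origin'] = 'https://observablehq.com'
    return response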
For CORS when just serving files with python3 -m http.server, there is no simple option. You can wrap the simple server, but at that point I'd recommend just using one of the two options above. Since we don't need to allow * and we're setting the header directly, we'll just allow Observable. The Python http.server documentation shows how we can do it (note, this is a bit simpler than the solution provided in the SO post linked above):

from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_header('Access-Control-Allow-Origin', 'observablehq.com')
        SimpleHTTPRequestHandler.end_headers(self)

httpd = HTTPServer(('localhost', 8080), CORSRequestHandler)
httpd.serve_forever()

SSL

Again for our own good, most browsers won't allow a site running in HTTPS mode (a site served encrypted with SSL) to load assets from HTTP endpoints. If they did, a site might look secure with the "lock" icon while making unencrypted transactions with our data under the hood. Observable is served only over HTTPS, so we also need to serve over HTTPS. Flask, http-server, and Python 3's simple server all serve without SSL by default. For any of these, we can wrap them in ngrok. This is probably the simplest solution for development:
- Create an account at ngrok.
- Download the executable for your platform.
- mv ngrok /usr/local/bin (or somewhere on your PATH).
- Set your authtoken: ngrok authtoken 1h3V....
- Run it: ngrok http 8080, using the port of the service you want to expose.

Done! Only the final step is necessary each time you want to use it.

To get started with SSL in any other option, first we can generate a local key pair by following the instructions here. First, I create a file domains.ext:

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = fake1.local
DNS.3 = fake2.local

Then run the following five commands:

openssl req -x509 -nodes -new -sha256 -days 1024 -newkey rsa:2048 -keyout RootCA.key -out RootCA.pem -subj "/C=US/CN=W209-CA"
openssl x509 -outform pem -in RootCA.pem -out RootCA.crt
openssl req -new -nodes -newkey rsa:2048 -keyout localhost.key -out localhost.csr -subj "/C=US/ST=Massachusetts/L=Amherst/O=Example-Certificates/CN=localhost.local"
openssl x509 -req -sha256 -days 90 -in localhost.csr -CA RootCA.pem -CAkey RootCA.key -CAcreateserial -extfile domains.ext -out localhost.crt
open RootCA.crt

The last command opens the cert in Mac's Keychain Access. You need to double-click the new key, open the "Trust" section, and choose to always trust this CA.

Options for SSL just serving files with node http-server:
- Set the --ssl/-S flag, and pass the cert/key with the --cert and --key options.
http-server -S --cert localhost.crt --key localhost.key

Options for SSL with Python's simple server (ref):

from http.server import HTTPServer, SimpleHTTPRequestHandler
import ssl

httpd = HTTPServer(('localhost', 4443), SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket,
                               keyfile="localhost.key",
                               certfile="localhost.crt",
                               server_side=True)
httpd.serve_forever()

Or we can switch to using Python Twisted:

twistd -no web --path
# with SSL
twistd -no web --path --https=443 -c localhost.crt -k localhost.key

Flask from the CLI:

FLASK_ENV=development FLASK_APP=hello.py flask run --cert localhost.crt --key localhost.key

There is an option to generate a self-signed cert on the fly, but it didn't work for me (NET::ERR_CERT_AUTHORITY_INVALID):

FLASK_ENV=development FLASK_APP=hello.py flask run --cert=adhoc

All together

The simplest solution for files: we can serve local files to Observable with two lines (plus the ngrok setup):

http-server --cors
ngrok http 8080

and then point Observable at the https URI ngrok gives you. It should be clear how to combine any of the CORS solutions with ngrok for https.

To combine the CORS and SSL solutions with the self-signed certs, first generate the certs as above, then:

Node simple server (http-server):

http-server --cors -S --cert localhost.crt --key localhost.key

Python simple server:

from http.server import HTTPServer, SimpleHTTPRequestHandler
import ssl

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_header('Access-Control-Allow-Origin', 'observablehq.com')
        SimpleHTTPRequestHandler.end_headers(self)

httpd = HTTPServer(('localhost', 8080), CORSRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket,
                               keyfile="localhost.key",
                               certfile="localhost.crt",
                               server_side=True)
httpd.serve_forever()

Then run it from the command line:

python3 script.py

Flask: already done! Just take hello.py with the CORS headers and run it with the certs passed in. Or you can run python hello.py and set the certs inside the script:

if __name__ == '__main__':
    app.run(debug=True, ssl_context=('localhost.crt', 'localhost.key'))

Connect to Observable

The last part is just fun now. Spin up an Observable notebook, import d3 (d3 = require("d3@6")), and load our Flask API:

d3.json("https://localhost:5000/api/")
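One caveat on the Python simple-server snippets above: ssl.wrap_socket has long been deprecated and is removed in Python 3.12. An equivalent hedged sketch with ssl.SSLContext, using the same certs:

from http.server import HTTPServer, SimpleHTTPRequestHandler
import ssl

httpd = HTTPServer(('localhost', 4443), SimpleHTTPRequestHandler)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="localhost.crt", keyfile="localhost.key")
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()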
from workflows.security import safeOpen
import cPickle
import json
import sys
from workflows import module_importer

def setattr_local(name, value, package):
    setattr(sys.modules[__name__], name, value)

module_importer.import_all_packages_libs("library", setattr_local)

def test_interaction(input_dict):
    return input_dict

def create_list(input_dict):
    return input_dict

def add_multiple(input_dict):
    output_dict = {}
    output_dict['sum'] = 0
    for i in input_dict['integer']:
        output_dict['sum'] = int(i) + output_dict['sum']
    return output_dict

def delay(input_dict, widget):
    widget.progress = 0
    widget.save()
    timeleft = int(input_dict['time'])
    i = 0
    import time
    import math
    while i < timeleft:
        time.sleep(1)
        i = i + 1
        widget.progress = math.floor(((i * 1.0) / timeleft) * 100)
        widget.save()
    widget.progress = 100
    widget.save()
    output_dict = {}
    output_dict['data'] = input_dict['data']
    return output_dict

def load_file(input_dict):
    return input_dict

def file_to_string(input_dict):
    f = safeOpen(input_dict['file'])
    output_dict = {}
    output_dict['string'] = f.read()
    return output_dict

def load_to_string(input_dict):
    ''' Opens the file and reads its contents into a string. '''
    f = safeOpen(input_dict['file'])
    output_dict = {}
    output_dict['string'] = f.read()
    return output_dict

def pickle_object(input_dict):
    ''' Serializes the input object. '''
    pkl_obj = cPickle.dumps(input_dict['object'])
    output_dict = {}
    output_dict['pickled_object'] = pkl_obj
    return output_dict

def unpickle_object(input_dict):
    ''' Deserializes the input object. '''
    obj = cPickle.loads(str(input_dict['pickled_object']))
    output_dict = {}
    output_dict['object'] = obj
    return output_dict

def call_webservice(input_dict):
    from services.webservice import WebService
    ws = WebService(input_dict['wsdl'], float(input_dict['timeout']))
    selected_method = {}
    for method in ws.methods:
        if method['name'] == input_dict['wsdl_method']:
            selected_method = method
    function_to_call = getattr(ws.client, selected_method['name'])
    ws_dict = {}
    for i in selected_method['inputs']:
        try:
            ws_dict[i['name']] = input_dict[i['name']]
            if ws_dict[i['name']] is None:
                pass
            if i['type'] == bool:
                if input_dict[i['name']] == "true":
                    ws_dict[i['name']] = 1
                else:
                    ws_dict[i['name']] = 0
            if ws_dict[i['name']] == '':
                if input_dict['sendemptystrings'] == "true":
                    ws_dict[i['name']] = ''
                else:
                    ws_dict.pop(i['name'])
        except Exception as e:
            print e
            ws_dict[i['name']] = ''
    print ws_dict
    results = function_to_call(**ws_dict)
    output_dict = results
    return output_dict

def multiply_integers(input_dict):
    product = 1
    for i in input_dict['integers']:
        product = product * int(i)
    output_dict = {'integer': product}
    return output_dict

def filter_integers(input_dict):
    return input_dict

def filter_integers_post(postdata, input_dict, output_dict):
    try:
        output_dict['integers'] = postdata['integer']
    except:
        pass
    return output_dict

def create_integer(input_dict):
    output_dict = {}
    output_dict['integer'] = input_dict['integer']
    return output_dict

def create_string(input_dict):
    return input_dict

def concatenate_strings(input_dict):
    output_dict = {}
    j = len(input_dict['strings'])
    for i in range(j):
        input_dict['strings'][i] = str(input_dict['strings'][i])
    output_dict['string'] = input_dict['delimiter'].join(input_dict['strings'])
    return output_dict

def display_string(input_dict):
    return {}

def add_integers(input_dict):
    output_dict = {}
    output_dict['integer'] = int(input_dict['integer1']) + int(input_dict['integer2'])
    return output_dict

def object_viewer(input_dict):
    return {}

def table_viewer(input_dict):
    return {}

def subtract_integers(input_dict):
    output_dict = {}
    output_dict['integer'] = int(input_dict['integer1']) - int(input_dict['integer2'])
    return output_dict

def create_range(input_dict):
    output_dict = {}
    output_dict['rangeout'] = range(int(input_dict['n_range']))
    return output_dict

def select_attrs(input_dict):
    return input_dict

def select_attrs_post(postdata, input_dict, output_dict):
    import Orange
    data = Orange.data.Table(input_dict['data'])
    new_attrs = []
    for name in postdata['attrs']:
        new_attrs.append(str(name))
    try:
        new_attrs.append(str(postdata['ca'][0]))
        class_attr = True
    except:
        class_attr = False
    new_domain = Orange.data.Domain(new_attrs, class_attr, data.domain)
    try:
        for meta in postdata['ma']:
            if data.domain.has_meta(str(meta)):
                new_domain.addmeta(Orange.feature.Descriptor.new_meta_id(),
                                   data.domain.getmeta(str(meta)))
            else:
                new_domain.add_meta(Orange.feature.Descriptor.new_meta_id(),
                                    data.domain[str(meta)])
    except:
        pass
    new_data = Orange.data.Table(new_domain, data)
    output_dict = {'data': new_data}
    return output_dict

def select_data(input_dict):
    return input_dict

def build_filter(val, attr, data):
    import Orange
    pos = 0
    try:
        pos = data.domain.meta_id(attr)
    except Exception, e:
        pos = data.domain.variables.index(attr)
    if val['operator'] == ">":
        return Orange.data.filter.ValueFilterContinuous(
            position=pos, ref=float(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.Greater)
    elif val['operator'] == "<":
        return Orange.data.filter.ValueFilterContinuous(
            position=pos, ref=float(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.Less)
    elif val['operator'] == "=":
        return Orange.data.filter.ValueFilterContinuous(
            position=pos, ref=float(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.Equal)
    elif val['operator'] == "<=":
        return Orange.data.filter.ValueFilterContinuous(
            position=pos, ref=float(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.LessEqual)
    elif val['operator'] == ">=":
        return Orange.data.filter.ValueFilterContinuous(
            position=pos, ref=float(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.GreaterEqual)
    elif val['operator'] == "between":
        return Orange.data.filter.ValueFilterContinuous(
            position=pos, min=float(val['values'][0]), max=float(val['values'][1]),
            oper=Orange.data.filter.ValueFilter.Between)
    elif val['operator'] == "outside":
        return Orange.data.filter.ValueFilterContinuous(
            position=pos, min=float(val['values'][0]), max=float(val['values'][1]),
            oper=Orange.data.filter.ValueFilter.Outside)
    elif val['operator'] in ["equals", "in"]:
        vals = []
        for v in val['values']:
            vals.append(Orange.data.Value(attr, str(v)))
        return Orange.data.filter.ValueFilterDiscrete(position=pos, values=vals)
    elif val['operator'] == "s<":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.Less,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "s>":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.Greater,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "s=":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.Equal,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "s<=":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.LessEqual,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "s>=":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.GreaterEqual,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "sbetween":
        return Orange.data.filter.ValueFilterString(
            position=pos, min=str(val['values'][0]), max=str(val['values'][1]),
            oper=Orange.data.filter.ValueFilter.Between,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "soutside":
        return (
            Orange.data.filter.ValueFilterString(
                position=pos, min=str(val['values'][0]), max=str(val['values'][1]),
                oper=Orange.data.filter.ValueFilter.Outside,
                case_sensitive=bool(val['case'])))
    elif val['operator'] == "scontains":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.Contains,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "snot contains":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.NotContains,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "sbegins with":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.BeginsWith,
            case_sensitive=bool(val['case']))
    elif val['operator'] == "sends with":
        return Orange.data.filter.ValueFilterString(
            position=pos, ref=str(val['values'][0]),
            oper=Orange.data.filter.ValueFilter.EndsWith,
            case_sensitive=bool(val['case']))

def select_data_post(postdata, input_dict, output_dict):
    import Orange, json
    data = input_dict['data']
    try:
        conditions = json.loads(str(postdata['conditions'][0]))
        for c in conditions['conditions']:
            if c['condition'][0]['operator'] in ["is defined", "sis defined"]:
                # if the operator is "is defined"
                fil = Orange.data.filter.IsDefined(domain=data.domain)
                for v in range(len(data.domain.variables)):
                    fil.check[int(v)] = 0
                if c['negate']:
                    fil.negate = True
                fil.check[str(c['condition'][0]['attr'])] = 1
            else:
                fil = Orange.data.filter.Values()
                fil.domain = data.domain
                if c['negate']:
                    fil.negate = True
                if len(c['condition']) > 1:
                    fil.conjunction = False
                    for val in c['condition']:
                        attr = data.domain[str(val['attr'])]
                        fil.conditions.append(build_filter(val, attr, data))
                else:
                    for val in c['condition']:
                        attr = data.domain[str(val['attr'])]
                        fil.conditions.append(build_filter(val, attr, data))
            data = fil(data)
    except Exception, e:
        pass
    output_dict = {'data': data}
    return output_dict

def build_classifier(input_dict):
    learner = input_dict['learner']
    data = input_dict['data']
    classifier = learner(data)
    output_dict = {'classifier': classifier}
    return output_dict

def apply_classifier(input_dict):
    import Orange
    classifier = input_dict['classifier']
    data = input_dict['data']
    new_domain = Orange.data.Domain(data.domain, classifier(data[0]).variable)
    new_domain.add_metas(data.domain.get_metas())
    new_data = Orange.data.Table(new_domain, data)
    for i in range(len(data)):
        c = classifier(data[i])
        new_data[i][c.variable.name] = c
    output_dict = {'data': new_data}
    return output_dict

# ORANGE CLASSIFIERS (update imports when switching to new orange version)

def bayes(input_dict):
    import orange
    output_dict = {}
    output_dict['bayesout'] = orange.BayesLearner(name="Naive Bayes (Orange)", hovername="DOlgo ime bajesa")
    return output_dict

def knn(input_dict):
    import orange
    output_dict = {}
    output_dict['knnout'] = orange.kNNLearner(name="kNN (Orange)")
    return output_dict

def rules(input_dict):
    import orange
    output_dict = {}
    output_dict['rulesout'] = orange.RuleLearner(name="Rule Learner (Orange)")
    return output_dict

def cn2(input_dict):
    import orngCN2
    output_dict = {}
    output_dict['cn2out'] = orngCN2.CN2Learner(name="CN2 Learner (Orange)")
    return output_dict

def svm(input_dict):
    import orngSVM
    output_dict = {}
    output_dict['svmout'] = orngSVM.SVMLearner(name='SVM (Orange)')
    return output_dict

def svmeasy(input_dict):
    import orngSVM
    output_dict = {}
    output_dict['svmeasyout'] = (
        orngSVM.SVMLearnerEasy(name='SVMEasy (Orange)'))
    return output_dict

def class_tree(input_dict):
    import orange
    output_dict = {}
    output_dict['treeout'] = orange.TreeLearner(name="Classification Tree (Orange)")
    return output_dict

def c45_tree(input_dict):
    import orange
    output_dict = {}
    output_dict['c45out'] = orange.C45Learner(name="C4.5 Tree (Orange)")
    return output_dict

def logreg(input_dict):
    import orange
    output_dict = {}
    output_dict['logregout'] = orange.LogRegLearner(name="Logistic Regression (Orange)")
    return output_dict

def majority_learner(input_dict):
    import orange
    output_dict = {}
    output_dict['majorout'] = orange.MajorityLearner(name="Majority Classifier (Orange)")
    return output_dict

def lookup_learner(input_dict):
    import orange
    output_dict = {}
    output_dict['lookupout'] = orange.LookupLearner(name="Lookup Classifier (Orange)")
    return output_dict

def random_forest(input_dict):
    from workflows.helpers import UnpicklableObject
    output_dict = {}
    rfout = UnpicklableObject("orngEnsemble.RandomForestLearner(trees=" + input_dict['n'] + ", name='RF" + str(input_dict['n']) + " (Orange)')")
    rfout.addimport("import orngEnsemble")
    output_dict['rfout'] = rfout
    return output_dict

# HARF (HIGH AGREEMENT RANDOM FOREST)
def harf(input_dict):
    #import orngRF_HARF
    from workflows.helpers import UnpicklableObject
    agrLevel = input_dict['agr_level']
    #data = input_dict['data']
    harfout = UnpicklableObject("orngRF_HARF.HARFLearner(agrLevel =" + agrLevel + ", name='HARF-" + str(agrLevel) + "')")
    harfout.addimport("import orngRF_HARF")
    #harfLearner = orngRF_HARF.HARFLearner(agrLevel = agrLevel, name = "_HARF-"+agrLevel+"_")
    output_dict = {}
    output_dict['harfout'] = harfout
    return output_dict

# CLASSIFICATION NOISE FILTER
def classification_filter(input_dict, widget):
    import noiseAlgorithms4lib
    output_dict = {}
    output_dict['noise_dict'] = noiseAlgorithms4lib.cfdecide(input_dict, widget)
    return output_dict

def send_filename(input_dict):
    output_dict = {}
    output_dict['filename'] = input_dict['fileloc'].strip('\"').replace('\\', '\\\\')
    return output_dict

def load_dataset(input_dict):
    import orange
    output_dict = {}
    output_dict['dataset'] = orange.ExampleTable(input_dict['file'])
    return output_dict

# SATURATION NOISE FILTER
def saturation_filter(input_dict, widget):
    import noiseAlgorithms4lib
    output_dict = {}
    output_dict['noise_dict'] = noiseAlgorithms4lib.saturation_type(input_dict, widget)
    return output_dict

# ENSEMBLE
def ensemble(input_dict):
    import math
    ens = {}
    data_inds = input_dict['data_inds']
    ens_type = input_dict['ens_type']
    # TODO
    ens_level = input_dict['ens_level']
    for item in data_inds:
        #det_by = item['detected_by']
        for i in item['inds']:
            if not ens.has_key(i):
                ens[i] = 1
            else:
                ens[i] += 1
    ens_out = {}
    ens_out['name'] = input_dict['ens_name']
    ens_out['inds'] = []
    n_algs = len(data_inds)
    print ens_type
    if ens_type == "consensus":
        ens_out['inds'] = sorted([x[0] for x in ens.items() if x[1] == n_algs])
    else:  # majority
        ens_out['inds'] = sorted([x[0] for x in ens.items() if x[1] >= math.floor(n_algs / 2 + 1)])
    output_dict = {}
    output_dict['ens_out'] = ens_out
    return output_dict

# NOISE RANK
def noiserank(input_dict):
    allnoise = {}
    data = input_dict['data']
    for item in input_dict['noise']:
        det_by = item['name']
        for i in item['inds']:
            if not allnoise.has_key(i):
                allnoise[i] = {}
                allnoise[i]['id'] = i
                allnoise[i]['class'] = data[int(i)].getclass().value
                allnoise[i]['by'] = []
            allnoise[i]['by'].append(det_by)
            print allnoise[i]['by']
    from operator import itemgetter
    outallnoise = sorted(allnoise.values(), key=itemgetter('id'))
    outallnoise.sort(compareNoisyExamples)
    output_dict = {}
    output_dict['allnoise'] = outallnoise
    output_dict['selection'] = {}
    return output_dict

def compareNoisyExamples(item1, item2):
    len1 = len(item1["by"])
    len2 = len(item2["by"])
    if len1 > len2:  # reversed, want to have decreasing order
        return -1
    elif len1 < len2:  # reversed, want to have decreasing order
        return 1
    else:
        return 0

def noiserank_select(postdata, input_dict, output_dict):
    try:
        outselection = postdata['selected']
        data = input_dict['data']
        selection = [0] * len(data)
        for i in outselection:
            selection[int(i)] = 1
        outdata = data.select(selection, 1)
        output_dict['selection'] = outdata if outdata != None else None
    except KeyError:
        output_dict['selection'] = None
    #output_dict['selection'] = outselection if outselection != None else None
    return output_dict

# EVALUATION OF NOISE DETECTION PERFORMANCE
def add_class_noise(input_dict):
    import noiseAlgorithms4lib
    output_dict = noiseAlgorithms4lib.insertNoise(input_dict)
    return output_dict

def aggr_results(input_dict):
    output_dict = {}
    output_dict['aggr_dict'] = {'positives': input_dict['pos_inds'],
                                'by_alg': input_dict['detected_inds']}
    return output_dict

def eval_batch(input_dict):
    alg_perfs = input_dict['perfs']
    beta = float(input_dict['beta'])
    performances = []
    for exper in alg_perfs:
        noise = exper['positives']
        nds = exper['by_alg']
        performance = []
        for nd in nds:
            nd_alg = nd['name']
            det_noise = nd['inds']
            inboth = set(noise).intersection(set(det_noise))
            recall = len(inboth) * 1.0 / len(noise) if len(noise) > 0 else 0
            precision = len(inboth) * 1.0 / len(det_noise) if len(det_noise) > 0 else 0
            print beta, recall, precision
            if precision == 0 and recall == 0:
                fscore = 0
            else:
                fscore = (1 + beta ** 2) * precision * recall / ((beta ** 2) * precision + recall)
            performance.append({'name': nd_alg, 'recall': recall,
                                'precision': precision, 'fscore': fscore,
                                'fbeta': beta})
        performances.append(performance)
    output_dict = {}
    output_dict['perf_results'] = performances
    return output_dict

def eval_noise_detection(input_dict):
    noise = input_dict['noisy_inds']
    nds = input_dict['detected_noise']
    performance = []
    for nd in nds:
        nd_alg = nd['name']
        det_noise = nd['inds']
        inboth = set(noise).intersection(set(det_noise))
        recall = len(inboth) * 1.0 / len(noise) if len(noise) > 0 else 0
        precision = len(inboth) * 1.0 / len(det_noise) if len(det_noise) > 0 else 0
        beta = float(input_dict['f_beta'])
        print beta, recall, precision
        if precision == 0 and recall == 0:
            fscore = 0
        else:
            fscore = (1 + beta ** 2) * precision * recall / ((beta ** 2) * precision + recall)
        performance.append({'name': nd_alg, 'recall': recall,
                            'precision': precision, 'fscore': fscore,
                            'fbeta': beta})
    from operator import itemgetter
    output_dict = {}
    output_dict['nd_eval'] = sorted(performance, key=itemgetter('name'))
    return output_dict

def avrg_std(input_dict):
    perf_results = input_dict['perf_results']
    stats = {}
    # Aggregate performance results
    n = len(perf_results)
    for i in range(n):
        for item in perf_results[i]:
            alg = item['name']
            if not stats.has_key(alg):
                stats[alg] = {}
                stats[alg]['precisions'] = [item['precision']]
                stats[alg]['recalls'] = [item['recall']]
                stats[alg]['fscores'] = [item['fscore']]
                stats[alg]['fbeta'] = item['fbeta']
            else:
                stats[alg]['precisions'].append(item['precision'])
                stats[alg]['recalls'].append(item['recall'])
                stats[alg]['fscores'].append(item['fscore'])
            # if last experiment: compute averages
            if i == n - 1:
                stats[alg]['avrg_pr'] = reduce(lambda x, y: x + y, stats[alg]['precisions']) / n
                stats[alg]['avrg_re'] = reduce(lambda x, y: x + y, stats[alg]['recalls']) / n
                stats[alg]['avrg_fs'] = reduce(lambda x, y: x + y, stats[alg]['fscores']) / n
    # Compute standard deviations
    import numpy
    avrgstdout = []
    print stats
    for alg, stat in stats.items():
        avrgstdout.append({'name': alg,
                           'precision': stat['avrg_pr'],
                           'recall': stat['avrg_re'],
                           'fscore': stat['avrg_fs'],
                           'fbeta': stat['fbeta'],
                           'std_pr': numpy.std(stat['precisions']),
                           'std_re': numpy.std(stat['recalls']),
                           'std_fs': numpy.std(stat['fscores'])})
    from operator import itemgetter
    output_dict = {}
    output_dict['avrg_w_std'] = sorted(avrgstdout, key=itemgetter('name'))
    return output_dict

# VISUALIZATIONS
def pr_space(input_dict):
    return {}

def eval_bar_chart(input_dict):
    return {}

def eval_to_table(input_dict):
    return {}

def data_table(input_dict):
    return {}

def data_info(input_dict):
    return {}

def sdmsegs(input_dict):
    return {}

def definition_sentences(input_dict):
    return {}

def term_candidates(input_dict):
    return {}

# FILE LOADING
def uci_to_odt(input_dict):
    from mothra.settings import FILES_FOLDER
    import orange
    output_dict = {}
    output_dict['data'] = orange.ExampleTable(FILES_FOLDER + "uci-datasets/" + input_dict['filename'])
    return output_dict

def odt_to_arff(input_dict):
    from noiseAlgorithms4lib import toARFFstring
    output_dict = {}
    f = toARFFstring(input_dict['odt'])
    output_dict['arff'] = f.getvalue()
    return output_dict

def string_to_file(input_dict):
    return {}

def rss_reader(input_dict, widget, stream):
    import feedparser
    from streams.models import StreamWidgetData
    feed = feedparser.parse(input_dict['url'])
    output_dict = {}
    if stream is None:
        output_dict['url'] = feed['items'][0]['link']
    else:
        try:
            swd = StreamWidgetData.objects.get(stream=stream, widget=widget)
            data = swd.value
        except:
            swd = StreamWidgetData()
            swd.stream = stream
            swd.widget = widget
            data = []
            swd.value = data
            swd.save()
        feed_length = len(feed['items'])
        feed['items'].reverse()
        for item in feed['items']:
            if item['id'] not in data:
                data.append(item['id'])
                swd.value = data
                swd.save()
                output_dict['url'] = item['link']
                break
        else:
            raise Exception("Halting stream.")
    return output_dict

def alter_table(input_dict):
    return {'altered_data': None}

def alter_table_finished(postdata, input_dict, output_dict):
    import Orange
    from Orange.feature import Type
    from visualization_views import orng_table_to_dict
    widget_id = postdata['widget_id'][0]
    # Parse the changes
    altered_cells = json.loads(postdata['alteredCells' + widget_id][0])
    new_table = Orange.data.Table(input_dict['data'])
    for cell, new_value in altered_cells.items():
        tokens = cell.split('_')
        inst_idx, att = int(tokens[1]), str(tokens[2])
        if new_table[inst_idx][att].var_type == Type.Continuous:
            new_table[inst_idx][att] = float(new_value)
        else:  # Discrete or string
            # TODO: this raises an exception if new_value is not among the
            # legal values for the discrete attribute
            # - add a dropdown list of legal values when editing the table!
            try:
                new_table[inst_idx][att] = str(new_value)
            except:
                # Catch orange exception and give a proper error message.
                raise Exception("Illegal value '%s' for discrete attribute '%s', legal values are: %s." % (new_value, att, new_table.domain[att].values))
    return {'altered_data': new_table}
Software and hardware environment

Watch the video here. This is a YouTube link, which may require a proxy to access in some regions. If you like my videos, please subscribe to my channel, turn on the notification bell, like and share. Thank you for your support.

Introduction

In an earlier post on flask routing, we saw that app.route can specify the HTTP request method (GET, POST, PUT, DELETE, etc.), and the view function can branch on the method to run different business logic. That already gives you a simple RESTful service. But flask has a better way to do this: the flask-restful extension.

The RESTful architectural style maps the elementary data operations, CRUD (create, read, update, delete), onto HTTP methods: GET fetches a resource, POST creates one (it can also update one), PUT updates one, and DELETE removes one. This unifies the data-manipulation interface: the HTTP method alone tells you which operation is being performed.

Installing flask-restful

The usual way, via pip:

pip install flask-restful

Basic usage of flask-restful

With the extension installed, we can import its modules; see the example below.

from flask import Flask, jsonify
from flask_restful import Api, Resource, reqparse

USERS = [
    {"name": "zhangsan"},
    {"name": "lisi"},
    {"name": "wangwu"},
    {"name": "zhaoliu"},
]

class Users(Resource):
    def get(self):
        return jsonify(USERS)

    def post(self):
        args = reqparse.RequestParser() \
            .add_argument('name', type=str, location='json', required=True, help="Name must not be empty") \
            .parse_args()
        # USERS holds dicts, so compare against the wrapped dict
        # (the original checked the bare string, which never matches).
        if {"name": args['name']} not in USERS:
            USERS.append({"name": args['name']})
        return jsonify(USERS)

    def delete(self):
        # Assigning USERS = [] would only create a local variable;
        # clear the module-level list in place instead.
        del USERS[:]
        return jsonify(USERS)

app = Flask(__name__)
api = Api(app, default_mediatype="application/json")
api.add_resource(Users, '/users')
app.run(host='0.0.0.0', port=5001, use_reloader=True)

The flask-restful extension adds routes through the api.add_resource() method. Its first argument is a class that inherits from the Resource base class, whose methods implement the logic for the different HTTP request methods; the second argument defines the URL path. In the Users class we implement get, post, and delete, corresponding to HTTP GET, POST, and DELETE requests.

flask-restful also provides reqparse, which makes it easy to validate the data the client sends in an HTTP request, somewhat like form validation; this is very practical in real projects.

Once the program is running, visit http://127.0.0.1:5001/users. A GET request returns the contents of USERS; a POST request appends an entry to USERS (if it doesn't already exist) and returns the updated list; a DELETE request clears USERS and returns the empty list.

On the client side, we use postman to simulate the requests.

Getting parameters in a GET method

For individual user names, we write another class, again inheriting from Resource. Its get method receives a userid parameter; for simplicity, userid is the index of that user in the USERS list.

class UserId(Resource):
    def get(self, userid):
        return jsonify({"name": USERS[int(userid)].get("name")})

api.add_resource(UserId, '/user/<userid>')

In api.add_resource(), the <userid> in the second argument /user/<userid> is the parameter passed by the user, written exactly the same way as in plain flask routing. Once the program is running, visiting http://127.0.0.1:5001/user/0 returns the first user in the USERS list.

Adding logging to flask-restful

Flask tutorial (15): logging already covered how to use logging in flask. In flask-restful there is a more elegant way to use a logger; see the example.

import logging.config
from flask import Flask, jsonify
from flask_restful import Api, Resource, reqparse

logging.config.dictConfig(
    {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "simple": {"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"}
        },
        "handlers": {
            "console": {
                "class": "logging.StreamHandler",
                "level": "DEBUG",
                "formatter": "simple",
                "stream": "ext://sys.stdout",
            },
            "info_file_handler": {
                "class": "logging.handlers.RotatingFileHandler",
                "level": "INFO",
                "formatter": "simple",
                "filename": "info.log",
                "maxBytes": 10485760,
                "backupCount": 50,
                "encoding": "utf8",
            },
            "error_file_handler": {
                "class": "logging.handlers.RotatingFileHandler",
                "level": "ERROR",
                "formatter": "simple",
                "filename": "errors.log",
                "maxBytes": 10485760,
                "backupCount": 20,
                "encoding": "utf8",
            },
            "debug_file_handler": {
                "class": "logging.handlers.RotatingFileHandler",
                "level": "DEBUG",
                "formatter": "simple",
                "filename": "debug.log",
                "maxBytes": 10485760,
                "backupCount": 50,
                "encoding": "utf8",
            },
        },
        "loggers": {
            "my_module": {"level": "ERROR", "handlers": ["console"], "propagate": "no"}
        },
        "root": {
            "level": "DEBUG",
            "handlers": ["error_file_handler", "debug_file_handler"],
        },
    }
)

USERS = [
    {"name": "zhangsan"},
    {"name": "lisi"},
    {"name": "wangwu"},
    {"name": "zhaoliu"},
]

class Users(Resource):
    def __init__(self, **kwargs):
        self.logger = kwargs.get('logger')

    def get(self):
        return jsonify(USERS)

    def post(self):
        args = reqparse.RequestParser() \
            .add_argument('name', type=str,
                          location='json', required=True, help="Name must not be empty") \
            .parse_args()
        self.logger.debug(args)
        if {"name": args['name']} not in USERS:
            USERS.append({"name": args['name']})
        return jsonify(USERS)

    def delete(self):
        del USERS[:]
        return jsonify(USERS)

app = Flask(__name__)
api = Api(app, default_mediatype="application/json")
api.add_resource(Users, '/users', resource_class_kwargs={
    "logger": logging.getLogger('/Users')
})
app.run(host='0.0.0.0', port=5001, use_reloader=True)

We use the same dictConfig as last time. The main difference is in api.add_resource(), which now takes the resource_class_kwargs parameter; the Resource subclass picks up the logger in its constructor __init__, after which it can be used in any of the handler methods. Firing another POST request from postman, debug.log looks like this.
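If you'd rather script the test than use postman, a small requests client exercising the same endpoints works too (a hedged sketch; the name value is just an example):

import requests

base = "http://127.0.0.1:5001"
print(requests.get(base + "/users").json())                          # list users
print(requests.post(base + "/users", json={"name": "tianqi"}).json())  # add one
print(requests.get(base + "/user/0").json())                         # fetch by index
print(requests.delete(base + "/users").json())                       # clear the list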
Hi, I'm trying to convert a bash script into Python in order to refresh the access tokens used by Aruba Central for the REST API. The current bash script is as follows:

client_id=123
client_secret=123
access_token="\Documents\Tokens\access_token.json"
REFRESH_TOKEN=$(cat $access_token.json | jq -r '.refresh_token')
curl https://eu-apigw.central.arubanetworks.com/oauth2/token -d "client_id=$CLIENT_ID&client_secret=$CLIENT_SECRET&grant_type=refresh_token&refresh_token=$REFRESH_TOKEN" > $access_token

I've managed to convert the curl into the following Python, but I'm not sure whether this is correct or not:

data = {'client_id': '$CLIENT_ID',
        'client_secret': '$CLIENT_SECRET',
        'grant_type': 'refresh_token',
        'refresh_token': '$REFRESH_TOKEN'}
response = requests.post('https://eu-apigw.central.arubanetworks.com/oauth2/token?client_id=$CLIENT_ID&client_secret=$CLIENT_SECRET&grant_type=refresh_token&refresh_token=$REFRESH_TOKEN', data=data)

The aim is for the script to request a token refresh by sending the Client ID, Client Secret, and Refresh Token (found in the access_token.json file) to the URL, and to write the data it receives back into the access_token.json file. This script worked in a Linux environment, but it needs to be converted to work on Windows and I am struggling to do so, so any help would be much appreciated.

Hi James, for your reference please find an example below:

import requests

url = "https://eu-apigw.central.arubanetworks.com/oauth2/token"
qparams = {"client_id": "$CLIENT_ID",
           "client_secret": "$CLIENT_SECRET",
           "grant_type": "refresh_token",
           "refresh_token": "$REFRESH_TOKEN"}
response = requests.request("POST", url, params=qparams)
print(response.text.encode('utf8'))

The above example works fine for me. Let me know if I can help you with anything else. Regards, Jay

Hi Jay, thank you for getting back to me regarding this. I've put in the code you attached, but I'm still struggling to get it to refresh for some reason. I'm not sure whether it's failing because of how I'm trying to copy the refresh_token from another file. Also, how would I go about copying the new token data from the response back into that same access_token.json file, if that is at all possible? I hope this all makes sense. I appreciate the help so far, thank you. I've attached the new code below:

client_id = XXX
client_secret = XXX
refresh_token = open('C:\Users\james.weston1\Desktop\access_token.json')
print(response.text.encode('utf8'))

In addition to Jay's response, you could refer to the following Python sample and the functions it provides. Git repo: https://github.com/aruba/central-examples-only/tree/master/rest-api-python-scripts File in discussion: https://github.com/aruba/central-examples-only/blob/master/rest-api-python-scripts/central_session.py Hope this helps! Karthik

Hi both, I've tried both solutions but I'm still failing to generate a new token each time I run the Python script. Jay, I used your response and edited the code as follows:

client_id = 123
client_secret = 123
refresh_token = open('C:\Users\james.weston1\Desktop\access_token.json')

I'm not sure whether it's failing as a result of me trying to copy the refresh_token from another file. I'm also unsure whether this code saves the new token to the access_token.json file I'm trying to retrieve it from, if that makes sense.
Karthik, I've also downloaded the Git repo you linked and the input_info.json file. I've put my credentials into input_info, and looking through central_session it didn't appear that I needed to change anything, so I left it as it was. However, when trying to run this it also fails to refresh the token and just closes. I'm not sure whether there is anything else I need to edit. I'm checking the refresh by downloading my token and seeing if it changes, so I hope that's the correct way to check. Many thanks to you both for looking into this for me.

If you executed the script per the readme section with this command:

python3 central_site_bringup.py -i=input_info.json

it would have created a file under the script directory, temp/tok_<customer_id>.pickle. If that file was already there, it would have refreshed the token in the file. It is not displayed on screen as output, though. You could also check by downloading the token in the UI. The purpose of the file is to act as a sample for creating groups, sites, and templates (that piece of code is commented out in the script). You could do more based on other files in the repo.

It turns out I didn't install the requirements, so that's my apologies. However, I'm struggling with this part too, I'm afraid. When using the command pip3 install -r requirements.txt in cmd/python, it just spits out invalid-syntax errors and the like. Also, once I get it working, do I need to execute central_site_bringup.py? If possible, I'd like not to create a new site. I know it deletes it straight after, but I'd like to avoid this either way if possible, just in case. Sorry for all the trouble. Many thanks.

You could install the packages mentioned in requirements.txt yourself as well; I prefer creating a virtual environment. You could directly call the functions available in central_session.py. Using central_site_bringup.py also works; it does not create/delete sites, since all of that code is commented out for reference purposes.

I've managed to install the packages from the requirements.txt file. I'm now looking to execute it to see if it works. As per your comment, am I right in saying that executing central_site_bringup will not create anything on Central at all? It is simply there as a test to refresh the token? I'm just a bit concerned about it creating sites etc. that I don't need. Also, you mentioned just calling central_session.py. Would that just run the refresh and store the token in the file mentioned? I presume I would do this with "python3 central_session.py -i=input_info.json". The only reason I ask is that this script is only to be used to refresh the token, so that it can be used with other scripts to pull REST API data down for a dashboard.

Not sure where you're at with this issue, but I have a few remarks that might help. First of all, I think you've got it wrong on this line:

refresh_token = open('C:\Users\james.weston1\Desktop\access_token.json')

This returns a file pointer, not a string. Moreover, if the file contents are actually JSON, you have to parse them to retrieve the token value.
So, assuming the file contents look like this:

{"refresh_token": "123456"}

your script should look more like this:

import requests
import json

client_id = 123
client_secret = 123
url = "https://eu-apigw.central.arubanetworks.com/oauth2/token"

# Use a raw string for Windows paths so backslashes are not treated as escapes.
with open(r'C:\Users\james.weston1\Desktop\access_token.json') as f:
    # Parse the file contents as json
    access_data = json.load(f)

# Get the refresh token from the resulting dict
refresh_token = access_data['refresh_token']

qparams = {
    "grant_type": "refresh_token",
    "client_id": client_id,
    "client_secret": client_secret,
    "refresh_token": refresh_token,
}
response = requests.request("POST", url, params=qparams)
print(response.text.encode('utf8'))

Then, if you want to store the new refresh token in the same file, you would do something like this (in the same script):

(...)
response = requests.request("POST", url, params=qparams)

# extract the new refresh token from the response
new_refresh_token = response.json()['refresh_token']

# Update the access data you previously got from your json file
access_data['refresh_token'] = new_refresh_token

# Write the data back to your json file
with open('your/file.json', 'w') as f:
    json.dump(access_data, f)

Hope it helps.
Question on making an observer to track individual instrument values

Hello, I am learning backtrader and really like it. I want to make an observer to track the values of each individual instrument in the portfolio, say 5 different futures. I am thinking of having one line for each instrument, but how should I define the lines beforehand, since the number of instruments could differ? I looked at the examples, especially the trades observer, but it seems like they all have a fixed number of lines defined. The trades observer has pnlplus/pnlminus lines defined and uses different markers to track individual instruments. Am I missing something obvious here? Can I initialize the lines tuple with the number of instruments as a parameter? Sorry for the simple question, I am still new to Python. Thank you.

run-out last edited by
Try starting your journey with this analyzer, PositionsValue, which works over any number of datas and returns the position value at each bar, along with the cash position I believe.

Thank you for the pointer. The PositionsValue analyzer does what I want. May I ask a further question? I know I can plot the result of this analyzer, but is it possible to make it an observer just like the DataTrades observer? This way it can be plotted directly using cerebro.plot() or Bokeh. My main issue is how to define the class variable "lines" dynamically, since their number/names depend on how many instruments are in the datas. I see that the DataTrades observer is produced by the metaclass MetaDataTrades, and the lines are created there dynamically. I want to follow the same pattern. I am trying to single-step into DataTrades and see what's being created at runtime. But somehow, even if I set all the breakpoints in DataTrades (using VS Code) and add the observer into cerebro, they never get triggered. The observer is generating correct plots, so it is working for sure. Debug mode works fine in other parts of the code, like single-stepping in strategies/indicators. I guess I must be missing something here regarding how the DataTrades class is being generated.

class MetaDataTrades(Observer.__class__):
    def donew(cls, *args, **kwargs):
        _obj, args, kwargs = super(MetaDataTrades, cls).donew(*args, **kwargs)

        # Recreate the lines dynamically
        if _obj.params.usenames:
            lnames = tuple(x._name for x in _obj.datas)
        else:
            lnames = tuple('data{}'.format(x) for x in range(len(_obj.datas)))

        # Generate a new lines class
        linescls = cls.lines._derive(uuid.uuid4().hex, lnames, 0, ())

        # Instantiate lines
        _obj.lines = linescls()
        ......

class DataTrades(with_metaclass(MetaDataTrades, Observer)):

OK, just figured it out. I can reuse the DataTrades observer. No change is needed to the MetaDataTrades class; it will generate multiple lines for the assets. The author sure constructed backtrader to be versatile! Here is my code to show the PnL of each individual asset. It just accumulates the daily settled PnL for each asset. Haven't checked the exact numbers yet; eyeballing it, it looks alright compared to the total account PnL.
class Asset_monitor(bt.utils.py3.with_metaclass(MetaDataTrades, bt.observer.Observer)):
    _stclock = True
    params = (('usenames', True),)
    plotinfo = dict(plot=True, subplot=True, plothlines=[0.0],
                    plotymargin=0.10, plotlinelabels=True)
    plotlines = dict()

    def next(self):
        strat = self._owner
        for inst in strat.datas:
            pos = strat.broker.positions[inst]
            cur_line = getattr(self.lines, inst._name)
            comminfo = strat.broker.getcommissioninfo(inst)
            if len(self) == 1:
                cur_line[0] = 0
            else:
                if pos.size != 0:
                    cur_pnl = comminfo.profitandloss(pos.size, inst.close[-1], inst.close[0])
                else:
                    cur_pnl = 0
                cur_line[0] = cur_line[-1] + cur_pnl

Here is the output. The whole strategy is carried by HG_copper, while CL and ZN are underwater almost the whole time. Compare it to the Value observer.
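For anyone reusing this, hooking the observer up is a one-liner. A minimal sketch, assuming the Asset_monitor class above and your usual data feeds and strategy:

import backtrader as bt

cerebro = bt.Cerebro()
# add data feeds and strategy as usual ...
cerebro.addobserver(Asset_monitor)  # one line per data feed is created dynamically
cerebro.run()
cerebro.plot()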
This lesson covers the for loop and the operations you can perform with it. This page presents possible solutions to module 7.1 of the course "Поколение Python: курс для начинающих" (Generation Python: a course for beginners), for self-checking.

Python is awesome
Write a program that prints the words "Python is awesome!" (without the quotes) 10 times.
Input format
Output format
The program should print the text "Python is awesome!" 10 times, each on its own line.

print('Python is awesome!\n' * 10)

Repeat after me 1
Given a sentence and the number of times it should be repeated, write a program that repeats the sentence the required number of times.
Input format
The first line contains a text sentence, the second the number of repetitions.
Output format
The program should print the given sentence the required number of times. Each repetition should start on a new line.

a = input()
b = int(input())
for i in range(b):
    print(a)

Sequence of characters
Write a program that uses exactly three for loops to print the following sequence of characters: six lines of AAA, five lines of BBBB, one line E, nine lines of TTTTT, and one line G.
Input format
Output format
The program should print the sequence of characters described above.

for i in range(6):
    print('A' * 3)
for i in range(5):
    print('B' * 4)
print('E')
for i in range(9):
    print('T' * 5)
print('G')

Star rectangle
A natural number n is given as input. Write a program that prints a star rectangle of size n × 19.
Input format
A natural number n ∈ [1; 20], the height of the star rectangle.
Output format
The program should print a star rectangle of size n × 19.
Hint: to print a line of stars, use multiplication of a string by a number.

n = int(input())
for i in range(n):
    print('*' * 19)

Repeat after me 2
Write a program that reads one line of text and prints 10 lines, numbered from 0 to 9, each containing the given text.
Input format
One line of text.
Output format
The program should print ten lines in accordance with the problem statement.

a = input()
for b in range(10):
    print(b, a)

Square of a number
A natural number n is given as input. Write a program that, for each number from 0 to n (inclusive), prints the phrase «Квадрат числа [число] равен [число]» ("The square of [number] equals [number]", without the quotation marks).
Input format
A natural number n.
Output format
The program should print text in accordance with the problem statement.

n = int(input())
for i in range(n + 1):
    print('Квадрат числа', i, 'равен', i ** 2)

Star triangle
A natural number n (n ≥ 2) is given as input: the leg of a right isosceles triangle. Write a program that prints a star triangle as in the example.
Input format
One natural number n (n ≥ 2).
Output format
The program should print the triangle in accordance with the problem statement.

g = int(input())
for f in range(g, 0, -1):
    print("*" * f)
I need a way to call Python code from Swift on an Apple platform. A library would be ideal. I've done a considerable amount of Google searching, and the closest material I found is for Objective-C.

In Swift 5 you can try the PythonKit framework. Here's an example of its usage:

import PythonKit

let sys = try Python.import("sys")
print("Python \(sys.version_info.major).\(sys.version_info.minor)")
print("Python Version: \(sys.version)")
print("Python Encoding: \(sys.getdefaultencoding().upper())")

I found this excellent and up-to-date gist that walks you through a complete solution: https://github.com/ndevenish/Site-ndevenish/blob/master/_posts/2017-04-11-using-python-with-swift-3.markdown

If you can get away with just using NSTask to launch a Python process, that's a pretty good option too.

In Swift 4.2 there was an approved feature to allow dynamic languages to be ported directly into Swift. It will look similar to:

// import pickle
let pickle = Python.import("pickle")
// file = open(filename)
let file = Python.open(filename)
// blob = file.read()
let blob = file.read()
// result = pickle.loads(blob)
let result = pickle.loads(blob)

If anyone is ever interested in calling Python from Swift, here is some helpful material I found:
Use the Python framework - https://developer.apple.com/library/ios/technotes/tn2328/_index.html
PyObjC (a little more challenging) - Cobbal - https://github.com/cobbal/python-for-iphone
Python docs (you would need to make a C-Swift bridge)
Most of it is for Objective-C, but if you need to use Swift you can easily just create an ObjC-Swift bridge (super-super easy) - look up the Apple docs
I have created a tool in ArcGIS based on a python script that will clip a large raster dataset into smaller tiles based on a fishnet polygon feature class. The script iterates through each feature in the fishnet and uses the selected feature to clip the raster. For some reason it keeps stopping after the 58th feature (there are 89 total). I tried using it on a different raster but with all the same parameters and it stopped after the 53rd feature. Any idea why this is happening? Here is the pertinent code:

for feat in fishnet:
    arcpy.SelectLayerByAttribute_management("fishnetlayer", "NEW_SELECTION", '"FID" = ' + str(select))
    arcpy.Clip_management(inRaster, "#", "BE_Seg_" + str(tile) + ".png", "fishnetlayer", "0", "ClippingGeometry")
    arcpy.AddMessage("Tile " + str(tile) + " successfully created")
    rastnodata = arcpy.GetRasterProperties_management("BE_Seg_" + str(tile) + ".png", "ALLNODATA")
    rastempty = rastnodata.getOutput(0)
    arcpy.AddMessage(rastempty)
    if rastempty == "1":
        arcpy.Delete_management("BE_Seg_" + str(tile) + ".png")
        select = select + 1
        tile = tile + 1
    else:
        select = select + 1
        tile = tile + 1
how to make a shop in python

starting
welcome to my tutorial! today i am going to show you the following functions that are needed in making a shop:
print("")
input()
if
for
else:

so how to start
so for this to work, what you want to use is a print statement to welcome people to the shop, just like this:
print("welcome to my shop would you like to buy anything?")
by doing that, the program will print what's inside your quotations when you run it.

cash
so in order for this to actually work, you need your money to be called something, by using a name like gold, cash, coins etc. so under that print statement from earlier, type this right under it:
gold = 100
that 100 is telling you how much gold you have to spend.

inputs
so when you ask that question with the print statement, you want the user to answer it. so for that to happen, do this under gold:
answer = input()
this will allow the user to input an answer of yes or no. then after that, what you want to put is this:
if answer == "yes":
    print("would you like to buy a shield or a sword?")
so far your code should look like this if you did it correctly:
print("welcome to my shop would you like to buy anything")
gold = 100
answer = input()
if answer == "yes":
    print("would you like to buy a shield or a sword?")

inputs for shield or sword
so next up is more input statements. so under the above code, what you need to do is this:
gear = input()
if gear == "sword":
    if gold >= 10:
        print("you have purchased a sword!")
by doing that it allows you to buy a sword. where it says if gold >= 10: it means that if you have less than 10 gold you can't buy it, but if you have 10 or more you can buy it.

removing gold
so after that, when you purchase the sword, you want to make it take away the gold you spent on the sword. so to do that you want to do this:
gold -= 10
print("you now have", gold)
this will take away gold from your balance and let you see how much you have left. now if you don't have enough gold, you want to do this:
else:
    print("you dont have enough gold")
so now for the shield, you basically want to do the same exact thing:
gear = input()
if gear == "shield":
    if gold >= 10:
        print("you have purchased a shield!")
        gold -= 10
        print("you now have", gold)
    else:
        print("you dont have enough gold")
that should be all you would need, and your final code should look like this:
print("welcome to my shop would you like to buy anything")
gold = 100
answer = input()
if answer == "yes":
    print("would you like to buy a shield or a sword?")
gear = input()
if gear == "sword":
    if gold >= 10:
        print("you have purchased a sword!")
        gold -= 10
        print("you now have", gold)
    else:
        print("you dont have enough gold")
gear = input()
if gear == "shield":
    if gold >= 10:
        print("you have purchased a shield!")
        gold -= 10
        print("you now have", gold)
    else:
        print("you dont have enough gold")
and yeah, after that, enjoy my tutorial
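One optional tweak from me, not part of the original tutorial: as written, the second gear = input() runs even after you buy a sword, so the program asks for gear twice. A minimal sketch that asks once and branches instead:

gear = input()
if gear == "sword" or gear == "shield":
    if gold >= 10:
        print("you have purchased a " + gear + "!")
        gold -= 10
        print("you now have", gold)
    else:
        print("you dont have enough gold")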
Ta-da, I've opened up a new series. This time the topic is an extremely important toolkit for machine learning in Python: the famous numpy. So today's article is the first in the Numpy series.

As the saying goes, to play machine learning well you might get away with weak Python, but you absolutely cannot get away with not knowing how to call the libraries (half joking). Numpy is arguably the most fundamental and most important utility library in Python. To do machine learning in Python and work with the various frameworks, you must know Numpy. Well-known frameworks such as TensorFlow and PyTorch follow the same style of array computation as Numpy, which tells you how important it is.

There are plenty of introductions to Numpy online, but they all boil down to the same thing: it is a very important foundational package for numerical computing in Python that makes matrix operations and large-scale computation convenient. What Numpy does is easy to understand, but we may be more curious about its deeper significance. If we keep asking why, from shallow to deep, we get several different answers.

The shallowest answer is simple: Numpy is convenient and fast, and it makes matrix operations easy. In Andrew Ng's course, he once demonstrated that the same matrix operation implemented with Python loops is at least a hundred times slower than the equivalent Numpy call. That difference is clearly dramatic.

But why is Numpy faster? If we keep digging, we get a new answer: the Numpy package is implemented in C++ under the hood, and C++ computation is obviously much faster than Python, so Numpy is naturally faster.

Is it really just because C++ is faster? This question goes beyond Numpy itself; we have to answer it from the characteristics of Python. Python is an interpreted language: when we run Python, we are actually running a Python interpreter, which interprets and executes each line of Python code.

If we think of the interpreter as a virtual machine and the Python code as a program inside that virtual machine, then running many of these in parallel makes it hard to guarantee thread safety. To solve this problem, Python introduced the GIL, the Global Interpreter Lock, which guarantees that at most one interpreter thread executes at any moment.

This mechanism guarantees thread safety, but it also limits multithreading in Python. Python's multithreading is essentially pseudo-multithreading, because only one interpreter thread is ever running. So if we want to speed up computation through multithreaded concurrency, that is not possible in pure Python.

Matrix and vector operations, however, are exactly the kind of work that can be accelerated with concurrency, and Python's own characteristics prevent it from doing that. Calling a computation library implemented in C++ from Python is therefore the only option. In fact, not just Numpy: almost all of Python's computation libraries are implemented in other languages and called from Python. Python itself is only the topmost caller.

Understanding this not only gives you a clearer picture of Python, it also helps later when learning frameworks such as TensorFlow.

Numpy is pleasant to use because it makes creating high-dimensional arrays and matrices very convenient. For example, in native Python, creating a two-dimensional array takes a fairly long definition. Say we want a 10 * 10 array:

arr = [[0 for _ in range(10)] for _ in range(10)]

In Numpy it is one line:

import numpy as np
arr = np.zeros((10, 10))

In the first line we import numpy and, for coding convenience, rename it np. This is the industry convention; almost every programmer who uses numpy renames it this way.

In numpy, the object that stores high-dimensional arrays is called ndarray; the corresponding object for matrices is mat. There is not much difference between the two: operations supported for matrices are basically also supported by ndarray. Just keep that impression for now; we will cover mat later.

Once we have created an ndarray, there are roughly four APIs for getting its basic information.
First, .ndim returns the number of dimensions, i.e. whether this is a 1-D, 2-D, ... array.
Second, .shape returns the size of the ndarray along each dimension.
Third, .dtype returns the type of the elements in the ndarray.
Finally, the tolist() method converts an ndarray into a native Python list and returns it.

So how do we create an ndarray in numpy? There are several ways. First, just as an ndarray can be converted into a native Python list, a native Python list can be converted into an ndarray. The syntax is much like a type conversion; we convert with np.array():

nums = [1, 3, 4, 6]
arr = np.array(nums)

Besides converting from a native Python list, we can create new ndarrays as needed. Numpy has many array-creation methods; let's start with some of the basic ones.

Creating a range
np.arange generates a sequence, somewhat like Python's native range, but more flexible. We can pass a single integer and it returns a sequence starting from 0:

np.arange(10)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

We can also specify the start, end and step, and numpy automatically generates an arithmetic sequence:

np.arange(1, 5, 0.5)
array([1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5])
Beyond that, numpy also provides the ones and zeros APIs, which generate arrays filled with 1s and 0s:

np.zeros((3, 4))
array([[0., 0., 0., 0.],
       [0., 0., 0., 0.],
       [0., 0., 0., 0.]])

np.ones((2, 3))
array([[1., 1., 1.],
       [1., 1., 1.]])

We can also use eye or identity to generate an N*N identity matrix:

np.eye(3)
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])

There is also a full API, which takes a shape and a value and fills an array of the given size with the given value:

np.full((3, 4), 3)
array([[3, 3, 3, 3],
       [3, 3, 3, 3],
       [3, 3, 3, 3]])

We rarely use this API, though, because generating an all-ones array with ones and multiplying by the value we want is equivalent to full.

In addition, ones, zeros and full each have a corresponding like method. A like method takes another ndarray instead of a shape, and numpy generates a new array with the same shape as that ndarray. Let's look at an example. First we generate an ordered sequence:

ex1 = np.arange(10)

Then we generate an all-zeros array of the same size with the zeros_like method:

ex2 = np.zeros_like(ex1)

which is equivalent to:

np.zeros(ex1.shape)

The other like methods are much the same; since they are so easy to replace, I don't use them much either.

Numpy supports many data types. Besides the usual int and float, it also supports the complex type for complex numbers, which makes its type set somewhat similar to what Go supports. The integer types are int8, int16, int32 and int64, each in a signed and an unsigned variant: int8 is a signed integer represented by 8 binary bits, while uint8 is the unsigned version. Floats have no unsigned variant and come as float16, float32, float64 and float128 (the last is platform-dependent). Complex numbers come in three kinds: complex64, complex128 and complex256. Beyond these there are also the string_, object and unicode_ types.

We can change the type of all elements in an ndarray by calling the astype method:

ex1 = np.arange(10)
ex1.astype(np.float64)
array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])

Besides converting after the fact, we can also specify the type we want at creation time with the dtype parameter, which saves the trouble of converting later:

ex1 = np.arange(10, dtype=np.float32)

In this article we not only covered how to create arrays in Numpy, we also talked about some characteristics of the Python language. It is precisely because of Python's multithreading limitations that its performance is poor in scenarios requiring highly concurrent computation, which is why it calls underlying implementations in C++ or other languages. This is also why Python is often called a glue language.

Numpy can be considered the foundation of machine learning in Python. Of course, besides Numpy, libraries such as pandas, matplotlib and scikit-learn are also indispensable. We will start from Numpy and, step by step, share all of these commonly used libraries with you. Dear readers, if you liked this, please follow along~
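To make the speed claim earlier in the article concrete, here is a rough benchmark sketch you can run yourself (exact numbers vary by machine and build; the point is the order-of-magnitude gap):

import timeit
import numpy as np

a = list(range(1_000_000))
b = np.arange(1_000_000)

# Sum of squares with a pure-Python loop vs. a vectorized numpy expression
loop_time = timeit.timeit(lambda: sum(x * x for x in a), number=10)
numpy_time = timeit.timeit(lambda: (b * b).sum(), number=10)
print(loop_time, numpy_time)  # the numpy version is typically far faster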
ui.WebView in an extension Correct me if I'm wrong, but does the memory limit while running Python as an appex extension prevent us from loading things with WebView.load_url or WebView.load_html ? I've managed to get a local html file to load, but it will not populate its img tags, images will not load and leave the default blank bar/box ( "<img src="http://url.jpg"> ). The same html file will load perfectly using ui.webview or webbrowser NOT running from an extension (the html doc will work anywhere and everywhere else). import ui def view(text): v = ui.View() wv = ui.WebView() v.add_subview(wv) wv.load_html(text) v.frame = (0,0,320,568) wv.frame = (0,0,320,568) v.present() There's a cleaned up example of the function called to run my html doc as a string, though I've also tried running as a local file. Zed_Oud cook This works for me... Shows the image and formatted text. # coding: utf-8 import ui import appex text = appex.get_text() if text: w = ui.WebView() w.scales_page_to_fit = False w.load_html(text) w.present() Tested this from Drafts: <h1>header</h1> <b>some bold text</b> <img src= "https://lh4.ggpht.com/wKrDLLmmxjfRG2-E-k5L5BUuHWpCOe4lWRF7oVs1Gzdn5e5yvr8fj-ORTlBF43U47yI=w300"></img> Is there something really different that you're trying to do? cook JonB Make sure you have valid html. here is an example showing an image. Note, if you are trying to load a local image, you may need to use the full path in a file:// url(os.path.abspath) import ui text="<img src='http://omz-software.com/pythonista/images/DeviceScreenshots.png'></img>" def view(text): v = ui.View() wv = ui.WebView() v.add_subview(wv) wv.load_html(text) v.frame = (0,0,320,568) wv.frame = (0,0,320,568) v.present() view(text) The HTML I'm using works everywhere, but not loaded through in an extension. That's my whole goal, I am trying to replace webbrowser.open("local file") for use in an extension. I haven't tried to load a local image using my HTML doc, I'll try that out. 
Here is an example of my HTML when I point it at "http://xkcd.com" <html> <body bgcolor="#000000"> <img src="http://imgs.xkcd.com/static/terrible_small_logo.png" alt="http://imgs.xkcd.com/static/terrible_small_logo.png" ><br><br> <img src="http://imgs.xkcd.com/comics/tire_swing.png" alt="http://imgs.xkcd.com/comics/tire_swing.png" ><br><br> <img src="http://imgs.xkcd.com/store/te-pages-sb.png" alt="http://imgs.xkcd.com/store/te-pages-sb.png" ><br><br> <img src="http://imgs.xkcd.com/s/a899e84.jpg" alt="http://imgs.xkcd.com/s/a899e84.jpg" > </body> </html> Here is my full code (cleaned and formatted, but just as dysfunctional when used as an extension): # coding: utf-8 import appex from urllib2 import urlopen import os, console, requests, urlparse def write_text(name, text, writ='w'): with open(name, writ) as o: o.write(text) def img_page(file_list, link_list=None): if link_list is None: link_list = file_list links = zip(file_list, link_list) x = '<br><br>\n'.join(['<img src="{0}" alt="{1}" >'.format(a,b) for a,b in links]) y = """ <html> <body bgcolor="#000000"> {0} </body> </html> """.format(x) return y def view_doc(text): import ui w = ui.WebView() w.scales_page_to_fit = False w.load_html(text) w.present() def open_file(file_path): import ui file_path = os.path.abspath(file_path) file_path = urlparse.urljoin('file://', os.path.abspath(file_path)) #v = ui.View() #file_path = 'http://xkcd.com' wv = ui.WebView() #v.add_subview(wv) wv.load_url(file_path) #v.frame = (0,0,320,568) #wv.frame = (0,0,320,568) #v.present() wv.present() def view_temp_index(file_url_list): temp_fn = '__temp.html' write_text(temp_fn, img_page(file_url_list)) open_file(temp_fn) def get_Pic_Links_Content(content,url=None): from bs4 import BeautifulSoup as bs if url is None: url = '' # 'http://' s = bs(content) p = s.findAll('img') pics = [] for x in p: y = urlparse.urljoin(url, x['src']) if y not in pics: pics.append(y) return pics def get_Pic_Links(url): r = requests.get(url) #print 'viewing pics from url:', r.url return get_Pic_Links_Content(r.content, url) def pick(url): choice = console.alert('View:','Pick where to view source:','Make File','View Directly','Console') pics = get_Pic_Links(url) if choice == 1: view_temp_index(pics) elif choice == 2: view_doc(img_page(pics)) else: print '\n'.join(pics) def main(): if not appex.is_running_extension(): print '\nRunning using test data...' url = 'http://xkcd.com' else: url = appex.get_url() if url: pick(url) else: print 'No input URL found.' if __name__ == '__main__': main()``` JonB your code works fine for me... this is in the beta. there may have been an issue with webviews in 2.0, I forget... cook @Zed_Oud I now see that I misunderstood your original post. You intended to feed an URL and retrieve images instead of sending HTML text to the script. Anyway, I also had the same problem as you when I ran the script you wrote. I changed the approach a little but with the same results. It seems as though with the next version of Pythonista this isn't an issue. I did test out a few different sites - some actually worked while others showed the same empty boxes. Perhaps there is some caching problem? I do not know! What is the purpose of displaying these files? Do you want to then choose one to save it to the camera roll? Do you want to download them all? I'm just asking because I'm sure there's another way to get what you want in the end!! @cook Thanks for looking at my code. I'm glad to know I hadn't overlooked something silly. 
As it is, I am trying to make an extension to re-display or mobilize certain kinds of websites. Imagine if I went to a website full of thumbnails, and the script grabbed all of the linked full-size images. There are other websites and whatnot, but I was mostly playing around with extensions for now. Thanks again for the help.
First, install our library with Composer: $ composer require gender-api/client use GenderApi\Client as GenderApiClient; try { $apiClient = new GenderApiClient('insert your API key'); // Query a single name $lookup = $apiClient->getByFirstName('elisabeth'); if ($lookup->genderFound()) { echo $lookup->getGender(); // female } // Query a full name and improve the result by providing a country code $lookup = $apiClient->getByFirstNameAndLastNameAndCountry('Thomas Johnson', 'US'); if ($lookup->genderFound()) { echo $lookup->getGender(); // male echo $lookup->getFirstName(); // Thomas echo $lookup->getLastName(); // Johnson } } catch (GenderApi\Exception $e) { // Name lookup failed due to a network error or insufficient credits // left. See https://gender-api.com/en/api-docs/error-codes echo 'Exception: ' . $e->getMessage(); } See the full client documentation here: https://github.com/markus-perl/gender-api-client function getGender($firstname) { $myKey = 'insert your server key here'; $data = json_decode(file_get_contents( 'https://gender-api.com/get?key=' . $myKey . '&name=' . urlencode($firstname))); return $data->gender; } echo getGender('markus'); //Output: male First, install our library with NPM: $ npm i gender-api.com-client --save import {Client as GenderApiClient, ResultSingleName} from "gender-api.com-client"; const genderApiClient = new GenderApiClient("your API key"); try { genderApiClient.getByFirstName('theresa', (response: ResultSingleName) => { console.log(response.gender); //female console.log(response.accuracy); //98 }); genderApiClient.getByFirstNameAndCountry('john', 'US', (response: ResultSingleName) => { console.log(response.gender); //male console.log(response.accuracy); //99 }); } catch(e) { console.log('Error:', e); } See the full client documentation here: https://github.com/markus-perl/gender-api-client-npm First, install our library with NPM: $ npm i gender-api.com-client --save try { var GenderApi = require('gender-api.com-client'); var genderApiClient = new GenderApi.Client('your api key'); genderApiClient.getByFirstName('theresa', function (response) { console.log(response.gender); //female console.log(response.accuracy); //98 }); genderApiClient.getByFirstNameAndCountry('john', 'US', function (response) { console.log(response.gender); //male console.log(response.accuracy); //99 }); } catch(e) { console.log('Error:', e); } See the full client documentation here: https://github.com/markus-perl/gender-api-client-npm Python 3.* import json from urllib.request import urlopen myKey = "insert your server key here" url = "https://gender-api.com/get?key=" + myKey + "&name=kevin" response = urlopen(url) decoded = response.read().decode('utf-8') data = json.loads(decoded) print( "Gender: " + data["gender"]); #Gender: male Python 2.* import json import urllib2 myKey = "insert your server key here" data = json.load(urllib2.urlopen("https://gender-api.com/get?key=" + myKey + "&name=markus")) print "Gender: " + data["gender"]; #Gender: male import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.HttpURLConnection; import java.net.URL; import com.google.gson.Gson; import com.google.gson.JsonObject; public class Main { public static void main(String[] args) { try { String myKey = "insert your server key here"; URL url = new URL("https://gender-api.com/get?key=" + myKey + "&name=markus"); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); if
(conn.getResponseCode() != 200) { throw new RuntimeException("Error: " + conn.getResponseCode()); } InputStreamReader input = new InputStreamReader(conn.getInputStream()); BufferedReader reader = new BufferedReader(input); Gson gson = new Gson(); JsonObject json = gson.fromJson(reader, JsonObject.class); String gender = json.get("gender").getAsString(); System.out.println("Gender: " + gender); // Gender: male conn.disconnect(); } catch (IOException e) { e.printStackTrace(); } } } Download a sample project here: Documentation: https://github.com/microknights/Gender-API // Contributed Client: https://github.com/microknights/Gender-API using MicroKnights.Gender_API; using Microsoft.Extensions.DependencyInjection; using System; using System.Net.Http; using System.Threading.Tasks; namespace GenderAPI { class Program { public static async Task RunTests(GenderApiClient client) { var responseStats = await client.GetStatistics(); if( responseStats.IsSuccess ) { Console.WriteLine($"IsLimitReached: {responseStats.IsLimitReached}"); Console.WriteLine($"Remaning requests: {responseStats.RemaningRequests}"); const string Name = "Frank Nielsen"; var responseName = await client.GetByNameAndCountry2Alpha(Name, "DK"); if( responseName.IsSuccess ) { Console.WriteLine($"{Name} is {responseName.GenderType.DisplayName}"); } else { Console.WriteLine($"ERRORS: {responseName.ErrorCode}-{responseName.Exception.Message}"); } } else { Console.WriteLine($"ERRORS: {responseStats.ErrorCode}-{responseStats.Exception.Message}"); } } public static Task UsingServiceProvider(string apiKey){ // client is thread-safe, and can be used static. var serviceProvider = new ServiceCollection() .UseGenderAPI(apiKey) .BuildServiceProvider(); return RunTests(serviceProvider.GetRequiredService<GenderApiClient>()); } public static Task PlainConsole(string apiKey){ // client is thread-safe, and can be used static. var client = new GenderApiClient( new HttpClient { BaseAddress = new Uri("https://gender-api.com") }, new GenderApiConfiguration { ApiKey = apiKey }); return RunTests(client); } static async Task Main(string[] args) { var apiKey = "?"; await PlainConsole(apiKey); await UsingServiceProvider(apiKey); } } }
I am trying to generate all triplets of data from a pandas dataframe based on a class or label. Suppose I have a dataframe with a unique id for each row and a class/label for each row. I want triplets in which the first two elements share the same class/label and the last element has a different class/label. I am trying to get every such triplet. I can generate the combinations of same-label elements just fine, but when I try to extend them with the elements that have different labels, I get an array full of None.

Example dataframe:

import pandas as pd
import numpy as np

df = pd.DataFrame({'uuid': np.arange(5), 'label': [0, 1, 1, 0, 0]})
print(df)
   label  uuid
0      0     0
1      1     1
2      1     2
3      0     3
4      0     4

Note that the uuid column is just a placeholder here. The point is that it is unique for each row. The following generates all combinations of same-label elements and puts them in a list:

import itertools as it

labels = df.label.unique()
all_combos = []
for l in labels:
    combos = list(it.combinations(df.loc[df.label == l].as_matrix(), 2))
    # convert to list because I anticipate needing to add to each combo later
    all_combos.extend([list(c) for c in combos])

all_combos
[[array([0, 0]), array([0, 3])],
 [array([0, 0]), array([0, 4])],
 [array([0, 3]), array([0, 4])],
 [array([1, 1]), array([1, 2])]]

Now I want each of these combinations to be appended with every different-label element. I am trying:

for l in labels:
    combos = list(it.combinations(df.loc[df.label == l].as_matrix(), 2))
    combo_list = [list(c) for c in combos]
    for c in combo_list:
        new_combos = [list(c).extend(s) for s in df.loc[df.label != l].as_matrix()]
        all_combos.append(new_combos)

I expect:

all_combos
[[array([0, 0]), array([0, 3]), array([1, 1])],
 [array([0, 0]), array([0, 3]), array([1, 2])],
 [array([0, 0]), array([0, 4]), array([1, 1])],
 [array([0, 0]), array([0, 4]), array([1, 2])],
 [array([0, 3]), array([0, 4]), array([1, 1])],
 [array([0, 3]), array([0, 4]), array([1, 2])],
 [array([1, 1]), array([1, 2]), array([0, 0])],
 [array([1, 1]), array([1, 2]), array([0, 3])],
 [array([1, 1]), array([1, 2]), array([0, 4])]]

I got:

all_combos
[[None, None], [None, None], [None, None], [None, None, None]]

Which is really strange: they are not even the same length! But I have the same number of None entries in my result as the expected number of valid triplets. I also tried all_combos.extend(new_combos) and got a flat list of 9 elements, so just a flattened version of the result above. In fact, any combination of list.extend and list.append in the last two lines of the inner loop gives me either the result shown above or a flattened version of it, neither of which makes any sense.

Edit: as mentioned in the comments, list.extend and list.append are in-place operations, so they return None. How can I then write my list comprehension so it gives me these values? Or restructure this in some other way that works?
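For the record, a sketch of one way to get the expected triplets, reusing the question's variable names: build a new list with the + operator instead of relying on extend's return value.

all_combos = []
for l in labels:
    for c in it.combinations(df.loc[df.label == l].as_matrix(), 2):
        for s in df.loc[df.label != l].as_matrix():
            # list concatenation returns a new list; extend() mutates in place and returns None
            all_combos.append(list(c) + [s])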
MySQL, PostgreSQL, Oracle, Redis, and many more, you just name it: databases are a really important piece of technology in the progress of human civilization. Today we can see how valuable data is, and keeping it safe and stable is where the database comes in! So we can see how important databases are as well. For quite some time I had been thinking of creating My Own Toy Database just to understand, play around, and experiment with it. As Richard Feynman said: "What I cannot create, I do not understand." So without any further talking, let's jump into the fun part: coding.

Let's Start Coding…
For this Toy Database, we'll use Python (my favorite ❤️). I named this database FooBarDB (I couldn't find any other name), but you can call it whatever you want! So first let's import some necessary Python libraries which are already available in the Python Standard Library:

import json
import os

Yes, we only need these two libraries! We need json as our database will be based on JSON, and os for some path-related stuff. Now let's define the main class FoobarDB with some pretty basic functions, which I'll explain below.

class FoobarDB(object):
    def __init__(self, location):
        self.location = os.path.expanduser(location)
        self.load(self.location)

    def load(self, location):
        if os.path.exists(location):
            self._load()
        else:
            self.db = {}
        return True

    def _load(self):
        self.db = json.load(open(self.location, "r"))

    def dumpdb(self):
        try:
            json.dump(self.db, open(self.location, "w+"))
            return True
        except:
            return False

Here we defined our main class with an __init__ function. Whenever we create a Foobar database, we only need to pass the location of the database. In the __init__ function we take the location parameter and replace ~ or ~user with the user's home directory so it works the intended way. Finally, we put it in the self.location variable so the other methods of the class can access it later. At the end, we call the load function, passing self.location as an argument.

. . . .

def load(self, location):
    if os.path.exists(location):
        self._load()
    else:
        self.db = {}
    return True

. . . .

In the load function we take the location of the database as a parameter. Then we check whether the database file exists. If it does, we load it with the _load() function (explained below). Otherwise, we create an empty in-memory JSON object. Finally, we return True on success.

. . . .

def _load(self):
    self.db = json.load(open(self.location, "r"))

. . . .

In the _load function, we simply open the database file from the location stored in self.location, parse it as JSON and load it into the self.db variable.

. . . .

def dumpdb(self):
    try:
        json.dump(self.db, open(self.location, "w+"))
        return True
    except:
        return False

. . . .

And finally, the dumpdb function: its name says what it does. It takes the in-memory database (actually a JSON object) from the self.db variable and saves it to the database file! It returns True if saved successfully, otherwise it returns False.

Make It a Little More Usable…
Wait a minute! A database is useless if it can't store and retrieve data, isn't it? Let's go and add that as well…

. . . .
def set(self, key, value):
    try:
        self.db[str(key)] = value
        self.dumpdb()
        return True
    except Exception as e:
        print("[X] Error Saving Values to Database : " + str(e))
        return False

def get(self, key):
    try:
        return self.db[key]
    except KeyError:
        print("No Value Can Be Found for " + str(key))
        return False

def delete(self, key):
    if not key in self.db:
        return False
    del self.db[key]
    self.dumpdb()
    return True

. . . .

The set function adds data to the database. As our database is a simple key-value database, we only take a key and a value as arguments. First, we try to add the key and value to the database and then save the database. If everything goes right, it returns True. Otherwise, it prints an error message and returns False. (We don't want it to crash and erase our data every time an error occurs.)

. . . .

def get(self, key):
    try:
        return self.db[key]
    except KeyError:
        return False

. . . .

get is a simple function: we take key as an argument and try to return the value linked to that key from the database. Otherwise, False is returned along with a message.

. . . .

def delete(self, key):
    if not key in self.db:
        return False
    del self.db[key]
    self.dumpdb()
    return True

. . . .

The delete function removes a key as well as its value from the database. First, we make sure the key is present in the database; if not, we return False. Otherwise, we delete the key with the built-in del, which automatically deletes the key's value. Next, we save the database and return True.

Now you might think: what if I've created a large database and want to reset it? In theory we could use delete, but it's not practical, and it's also very time-consuming! ⏳ So we can create a function to do this task...

. . . .

def resetdb(self):
    self.db = {}
    self.dumpdb()
    return True

. . . .

Here's the function to reset the database, resetdb! It's so simple: first, we re-assign our in-memory database to an empty JSON object, and then we just save it! And that's it! Our database is now again clean shaven.

Finally…
That's it, friends! We have created our own Toy Database! Actually, FoobarDB is just a simple demo of a database. It's like a cheap DIY toy: you can improve it any way you want. You can also add many other functions according to your needs.

Full source is here: bauripalash/foobardb

I hope you enjoyed it! Let me know your suggestions, ideas or mistakes I've made in the comments below! Thank you! See you soon!

If You Like My Work (My Articles, Stories, Software, Research and many more) Consider Buying Me A Coffee ☕
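To round the tutorial off, here is a quick sketch of how the finished class might be used (the file name and keys are arbitrary examples of mine, not from the original post):

db = FoobarDB("~/foobar.json")   # creates the file on first dump if it doesn't exist
db.set("name", "palash")
print(db.get("name"))            # -> palash
db.delete("name")
db.resetdb()                     # wipe everything and save the empty database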
PolitBERT

Background
This model was created to specialize on political speeches, interviews and press briefings of English-speaking politicians.

Training
The model was initialized with the pre-trained weights of BERT-base and trained for 20 epochs on the standard MLM task with default parameters. The learning rate was 5e-5 with a linearly decreasing schedule and AdamW. The batch size was 8 per GPU, training on two Nvidia GTX TITAN X cards. The rest of the configuration is the same as in AutoConfig.from_pretrained('bert-base-uncased'). As the tokenizer, the default BERT tokenizer was used (BertTokenizer.from_pretrained('bert-base-uncased')).

Dataset
PolitBERT was trained on the following dataset, which has been split up into single sentences: https://www.kaggle.com/mauricerupp/englishspeaking-politicians

Usage
To predict a missing word of a sentence, the following pipeline can be applied:

from transformers import pipeline, BertTokenizer, AutoModel

fill_mask = pipeline("fill-mask",
                     model=AutoModel.from_pretrained('maurice/PolitBERT'),
                     tokenizer=BertTokenizer.from_pretrained('bert-base-uncased'))

print(fill_mask('Donald Trump is a [MASK].'))

Training Results
(The original model card embeds plots of the evaluation loss, the training loss and the learning rate schedule here.)
2D 5-Neighbor Cellular Automata Juanlast edited by Juan My question: if there is a better way to implement the rules without using a lot of "if statements"? The rule is: The base 2 digits of the rule number determines the CA evolution. The last bit specifies the state of a cell if all neighbors are OFF and it too is OFF. The next to last bit specifies the state of a cell if all neighbors are OFF but the cell itself is ON. Then each earlier pair of bits specifies what should happen if progressively (Totalistic) more neighbors are black. So, bits 2^0 and 2^1 apply if none of the four neighbors are ON, bits 2^2 and 2^3 apply if one neighbor is ON, bits 2^4 and 2^5 apply if two neighbors are ON, bits 2^6 and 2^7 apply if three neighbors are ON and bits 2^8 and 2^9 apply if all four neighbors are ON. (http://oeis.org/wiki/Index_to_2D_5-Neighbor_Cellular_Automata) Thank you! ''' 2D 5-neighbor cellular automata with drawbot, by Juan Feng See "A New Kind of Science" by Stephen Wolfram (p.170 - 179) http://www.wolframscience.com/nks/p171--cellular-automata/ ''' # square size cellSize = 10 # odd numbers only numCell = 55 if numCell % 2 == 0: numCell += 1 canvas = numCell * cellSize size (canvas, canvas) # type the number: 0 - 1023 rule = 462 ruleSet = bin(rule)[2:].zfill(10) print (ruleSet) # starting from one active black cell in the center grid = [[0 for x in range(numCell)]for y in range(numCell)] grid[int((numCell-1)/2)][int((numCell-1)/2)] = 1 # count neighbor numbers (considering Von Neumann neighbors) def nbs(grid, r, c): def get(r, c): if 0 <= r < len(grid) and 0 <= c < len(grid[r]): return grid[r][c] else: return 0 neighbors_list = [get(r-1, c), get(r, c-1), get(r, c+1), get(r+1, c)] return sum(map(bool, neighbors_list)) # step: 0 - inf step = 22 for i in range(step): newGrid = [] for r,u in enumerate(grid): newGrid.append([]) for c,v in enumerate(u): if grid[r][c] == 0 and nbs(grid,r,c) == 0: newGrid[r].append(int(ruleSet[9])) if grid[r][c] == 1 and nbs(grid,r,c) == 0: newGrid[r].append(int(ruleSet[8])) if grid[r][c] == 0 and nbs(grid,r,c) == 1: newGrid[r].append(int(ruleSet[7])) if grid[r][c] == 1 and nbs(grid,r,c) == 1: newGrid[r].append(int(ruleSet[6])) if grid[r][c] == 0 and nbs(grid,r,c) == 2: newGrid[r].append(int(ruleSet[5])) if grid[r][c] == 1 and nbs(grid,r,c) == 2: newGrid[r].append(int(ruleSet[4])) if grid[r][c] == 0 and nbs(grid,r,c) == 3: newGrid[r].append(int(ruleSet[3])) if grid[r][c] == 1 and nbs(grid,r,c) == 3: newGrid[r].append(int(ruleSet[2])) if grid[r][c] == 0 and nbs(grid,r,c) == 4: newGrid[r].append(int(ruleSet[1])) if grid[r][c] == 1 and nbs(grid,r,c) == 4: newGrid[r].append(int(ruleSet[0])) grid = newGrid # draw cells yy = 0 while yy * cellSize <= canvas - cellSize: xx = 0 while xx * cellSize <= canvas - cellSize: if grid[xx][yy] == 1: fill(0) else: fill(.7) rect(xx*cellSize, yy*cellSize, cellSize, cellSize) xx += 1 yy += 1 # saveImage('~/Desktop/2d_CA_test.png') jolast edited by jo hi juan, pretty nice! I hope somebody will post a more general and cleaner solution to this but one way would be remove all theifs and calculate the ruleSet position: val = 9 - 2 * nbs(grid,r,c) - grid[r][c] newGrid[r].append(int(ruleSet[val])) This line: return sum(map(bool, neighbors_list)) could be simplified to just: return sum(neighbors_list) Two remarks: If you are drawing the result in two colors you could just draw the background colour with onerectcovering the whole canvas and then draw black cells if the grid position is positive. Lists are not pythons fastest collection. 
It would need some rewriting, but using a dictionary for the grid could speed this up. Good luck!

UPDATE / ADDITION: actually, the whole check of whether a cell is active or not could be reduced to one quarter of the grid, since there is a twofold symmetry. So just check it once and then set all four quarters.

Juan last edited by
@jo Hi jo, thank you so much for the help, I'll try to rewrite it using a dictionary. I'm not sure I got the "one quarter" approach, like how would I count the neighbor numbers?

jo last edited by
I would leave the center or starting point at (0, 0) and shift the origin with translate(canvas/2, canvas/2). If the cell at (n, n) is positive, add (n, n), (-n, n), (n, -n), (-n, -n) to the list or dict of active cells. Not sure if that makes sense:

for x in range(cell_amount):
    for y in range(cell_amount):
        if (x, y):  # check if cell is active here
            grid[( x,  y)] = 1
            grid[(-x,  y)] = 1
            grid[( x, -y)] = 1
            grid[(-x, -y)] = 1

Hope that makes sense. Good luck!
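Putting jo's index formula from earlier in the thread to work, the whole if-chain in the original script collapses to a single lookup. A sketch of the update loop with the same behavior (it assumes the step, grid, nbs and ruleSet definitions from the original code):

for i in range(step):
    newGrid = []
    for r, u in enumerate(grid):
        newGrid.append([])
        for c, v in enumerate(u):
            # ruleSet index: 9 for (off, 0 neighbors) down to 0 for (on, 4 neighbors)
            val = 9 - 2 * nbs(grid, r, c) - grid[r][c]
            newGrid[r].append(int(ruleSet[val]))
    grid = newGrid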
def rand_password(self):
    massive = []
    from random import choice
    from string import digits
    string.ascii_letters  # the ASCII characters 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
    a = int(self.ui.lineEdit.text())
    count = 0
    while count < a:
        num_or_letter = random.randint(1, 2)
        if num_or_letter < 2:
            massive.append(random.choice(string.ascii_letters))
        else:
            massive.append(random.randint(0, 9))
        count = count + 1
    print(*massive, sep='')
    self.ui.lineEdit_5.setText(*massive, sep='')

This code is supposed to produce a random password. But I cannot figure out how to pass this password, which is built up in the list, into the app. The console raises an error on what I have now.

Traceback (most recent call last):
  File "C:\Users\mayer\Desktop\╨рэфюьрщч 4.0\main.py", line 62, in rand_password
    self.ui.lineEdit_5.setText(*massive, sep = '')
TypeError: setText() takes no keyword arguments

How do I solve this problem?
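The traceback says it directly: unlike print(), QLineEdit.setText() accepts exactly one string and no keyword arguments. A minimal fix sketch is to join the list into one string first (note that massive mixes str and int items, hence the map):

password = ''.join(map(str, massive))  # convert the int digits to str before joining
self.ui.lineEdit_5.setText(password)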
I have a Lopy board that collects the sensor's data over serial and logs them to the local SD card and then publishes the data to the database using the MQTT protocol. Most of the time the mqtt works properly, but sometimes it was errored out (I don't know what does exactly the error message mean.). Below is the error message, I print the raw data on REPL every second and log the 5-min average. The error often happened after logging the data to the SD card. Code: Select all Raw:b'000.010,2.0,+27.2,031,1016.3,00,*01528\r\n', length:45 Raw:b'000.010,2.0,+27.2,031,1016.3,00,*01528\r\n', length:45 LOGGED: 2019-11-26 16:30:00,EST,9,0,2.0,27.2,31,1016.3,00,V Traceback (most recent call last): File "main.py", line 223, in <module> File "/flash/lib/mqtt.py", line 122, in publish OSError: [Errno 104] ECONNRESET Pycom MicroPython 1.20.0.rc13 [v1.9.4-94bb382] on 2019-08-22; LoPy4 with ESP32 Type "help()" for more information. >>> >>> Code: Select all self.sock.write(pkt, i + 1) Code: Select all 0x4 b'30:4f:00:00' Code: Select all 0x4 b'30:52:00:00' I had tried to add 50 ms delay to see if it solve the issue, but no help. If you have any clues or instructions that can address this issue, please let me know! Thank you in advance. I also copy the MQTT library that I used for my project. Code: Select all #!/usr/bin/env python # # Copyright (c) 2019, Pycom Limited. # # This software is licensed under the GNU GPL version 3 or any # later version, with permitted additional terms. For more information # see the Pycom Licence v1.0 document supplied with this file, or # available at https://www.pycom.io/opensource/licensing # import usocket as socket import ustruct as struct import time from ubinascii import hexlify class MQTTException(Exception): pass class MQTTClient: def __init__(self, client_id, server, port=0, user=None, password=None, keepalive=0, ssl=False, ssl_params={}): if port == 0: port = 8883 if ssl else 1883 self.client_id = client_id self.sock = None self.addr = socket.getaddrinfo(server, port)[0][-1] self.ssl = ssl self.ssl_params = ssl_params self.pid = 0 self.cb = None self.user = user self.pswd = password self.keepalive = keepalive self.lw_topic = None self.lw_msg = None self.lw_qos = 0 self.lw_retain = False def _send_str(self, s): self.sock.write(struct.pack("!H", len(s))) self.sock.write(s) def _recv_len(self): n = 0 sh = 0 while 1: b = self.sock.read(1)[0] n |= (b & 0x7f) << sh if not b & 0x80: return n sh += 7 def set_callback(self, f): self.cb = f def set_last_will(self, topic, msg, retain=False, qos=0): assert 0 <= qos <= 2 assert topic self.lw_topic = topic self.lw_msg = msg self.lw_qos = qos self.lw_retain = retain def connect(self, clean_session=True): self.sock = socket.socket() self.sock.connect(self.addr) if self.ssl: import ussl self.sock = ussl.wrap_socket(self.sock, **self.ssl_params) msg = bytearray(b"\x10\0\0\x04MQTT\x04\x02\0\0") msg[1] = 10 + 2 + len(self.client_id) msg[9] = clean_session << 1 if self.user is not None: msg[1] += 2 + len(self.user) + 2 + len(self.pswd) msg[9] |= 0xC0 if self.keepalive: assert self.keepalive < 65536 msg[10] |= self.keepalive >> 8 msg[11] |= self.keepalive & 0x00FF if self.lw_topic: msg[1] += 2 + len(self.lw_topic) + 2 + len(self.lw_msg) msg[9] |= 0x4 | (self.lw_qos & 0x1) << 3 | (self.lw_qos & 0x2) << 3 msg[9] |= self.lw_retain << 5 self.sock.write(msg) #print(hex(len(msg)), hexlify(msg, ":")) self._send_str(self.client_id) if self.lw_topic: self._send_str(self.lw_topic) self._send_str(self.lw_msg) if self.user is not None: 
self._send_str(self.user) self._send_str(self.pswd) resp = self.sock.read(4) assert resp[0] == 0x20 and resp[1] == 0x02 if resp[3] != 0: raise MQTTException(resp[3]) return resp[2] & 1 def disconnect(self): self.sock.write(b"\xe0\0") self.sock.close() def ping(self): self.sock.write(b"\xc0\0") def publish(self, topic, msg, retain=False, qos=0): pkt = bytearray(b"\x30\0\0\0") pkt[0] |= qos << 1 | retain sz = 2 + len(topic) + len(msg) if qos > 0: sz += 2 assert sz < 2097152 i = 1 while sz > 0x7f: pkt[i] = (sz & 0x7f) | 0x80 sz >>= 7 i += 1 pkt[i] = sz #print(hex(len(pkt)), hexlify(pkt, ":")) #time.sleep_ms(50) # add sleep for test self.sock.write(pkt, i + 1) #time.sleep_ms(50) # add sleep for test self._send_str(topic) if qos > 0: self.pid += 1 pid = self.pid struct.pack_into("!H", pkt, 0, pid) self.sock.write(pkt, 2) self.sock.write(msg) if qos == 1: while 1: op = self.wait_msg() if op == 0x40: sz = self.sock.read(1) assert sz == b"\x02" rcv_pid = self.sock.read(2) rcv_pid = rcv_pid[0] << 8 | rcv_pid[1] if pid == rcv_pid: return elif qos == 2: assert 0 def subscribe(self, topic, qos=0): assert self.cb is not None, "Subscribe callback is not set" pkt = bytearray(b"\x82\0\0\0") self.pid += 1 struct.pack_into("!BH", pkt, 1, 2 + 2 + len(topic) + 1, self.pid) #print(hex(len(pkt)), hexlify(pkt, ":")) self.sock.write(pkt) self._send_str(topic) self.sock.write(qos.to_bytes(1, 'little')) while 1: op = self.wait_msg() if op == 0x90: resp = self.sock.read(4) #print(resp) assert resp[1] == pkt[2] and resp[2] == pkt[3] if resp[3] == 0x80: raise MQTTException(resp[3]) return # Wait for a single incoming MQTT message and process it. # Subscribed messages are delivered to a callback previously # set by .set_callback() method. Other (internal) MQTT # messages processed internally. def wait_msg(self): res = self.sock.read(1) self.sock.setblocking(True) if res is None: return None if res == b"": raise OSError(-1) if res == b"\xd0": # PINGRESP sz = self.sock.read(1)[0] assert sz == 0 return None op = res[0] if op & 0xf0 != 0x30: return op sz = self._recv_len() topic_len = self.sock.read(2) topic_len = (topic_len[0] << 8) | topic_len[1] topic = self.sock.read(topic_len) sz -= topic_len + 2 if op & 6: pid = self.sock.read(2) pid = pid[0] << 8 | pid[1] sz -= 2 msg = self.sock.read(sz) self.cb(topic, msg) if op & 6 == 2: pkt = bytearray(b"\x40\x02\0\0") struct.pack_into("!H", pkt, 2, pid) self.sock.write(pkt) elif op & 6 == 4: assert 0 # Checks whether a pending message from server is available. # If not, returns immediately with None. Otherwise, does # the same processing as wait_msg. def check_msg(self): self.sock.setblocking(False) return self.wait_msg()
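The log alone does not pin down a root cause, but ECONNRESET inside publish() usually means the broker or an intermediate NAT silently dropped the TCP connection between the 5-minute uploads. A common mitigation, my sketch rather than anything in the library above, is to catch the error, reconnect and retry:

import time

def publish_with_retry(client, topic, msg, retries=3):
    # Try to publish; on a reset socket, reconnect and try again.
    for attempt in range(retries):
        try:
            client.publish(topic, msg)
            return True
        except OSError:
            # broker closed the socket (e.g. keepalive expired); reconnect and retry
            time.sleep(1)
            try:
                client.connect()
            except OSError:
                pass
    return False

Calling client.ping() more often than the keepalive interval, or reconnecting right before each publish, are common variations on the same idea.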
Make a Full Lexer in Python!

What is a Lexer?
A lexer is an analyzer that moves through your code looking at each character and trying to create tokens out of them. This input

int a = 5*5

can be turned into

[('KeyWord', 'int'), ('ID', 'a'), ('assign', '='), ('num', 5), ('OP', '*'), ('num', 5)]

by the lexer you will learn how to create.

What if I Have Problems?
If you have trouble understanding something or you get errors, tell me and I'll try my best to tell you what's wrong.

Let's Get Started!
First, open a new Python repl with whatever name you choose. Then create a function lex. This will be our function that basically does everything :). Then make a variable code set to input(). Make sure the code initialization is not in the function. And call lex on code.

def lex(line):
    pass

code = input()
lex(code)

After this, set a new variable in lex and name it count or lexeme_count or something; set it to 0.

def lex(line):
    lexeme_count = 0

code = input()
lex(code)

The lexeme_count variable is going to keep track of the chars you have already scanned. Once you have that code, add a while loop saying that as long as the number of chars you have scanned is less than the length of the line, keep scanning.

def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme_count += 1

code = input()
lex(code)

We will then make it more powerful by knowing what each lexeme is.

def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        lexeme_count += 1

code = input()
lex(code)

Then we can tell what the type is by using an if-elif-else statement to check the type of lexeme. Make sure to move the lexeme_count += 1 part into the else.

def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        if lexeme.isdigit():
            pass
        elif lexeme == '"' or lexeme == "'":
            pass
        else:
            lexeme_count += 1

code = input()
lex(code)

Let's fill in the blank conditional blocks.

def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        if lexeme.isdigit():
            typ, tok, consumed = lex_num(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme == '"' or lexeme == "'":
            typ, tok, consumed = lex_str(line[lexeme_count:])
            lexeme_count += consumed
        else:
            lexeme_count += 1

code = input()
lex(code)

Whoa, Slow Down! What's Going on?
What we're doing here is making three variables: one for the type of each token, one for the token itself, and one for the amount of lexemes 'consumed', 'eaten', or 'scanned'. Then we are assigning those variables to a function call which takes the rest of the line and gets the rest of the token. We do this both for digits and strings. After this, we advance lexeme_count by the amount of chars consumed so they keep up with each other.

Is this it?
This is certainly not the full lexical analyzer, so let's add some identifier lexing! Once we have finished with that, we can scan for literals, conditionals, operators, keywords, etc.

Let's Lex Some Identifiers!
Add another elif to the if-elif-else statement; this will check if lexeme is a letter of the alphabet.

def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        if lexeme.isdigit():
            typ, tok, consumed = lex_num(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme == '"' or lexeme == "'":
            typ, tok, consumed = lex_str(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme.isalpha():
            pass
        else:
            lexeme_count += 1

code = input()
lex(code)

In this elif, we need to mirror what we did earlier, but with a call to a different function: lex_id().
def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        if lexeme.isdigit():
            typ, tok, consumed = lex_num(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme == '"' or lexeme == "'":
            typ, tok, consumed = lex_str(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme.isalpha():
            typ, tok, consumed = lex_id(line[lexeme_count:])
            lexeme_count += consumed
        else:
            lexeme_count += 1

code = input()
lex(code)

Time To Make The Functions!
We used three functions, but we haven't defined them. Let's go ahead and do that.

def lex_num(line):
    num = ""

def lex_str(line):
    delimiter = line[0]
    string = ""

def lex_id(line):
    id = ""

First we'll make the lex_num function walk the line until the digits end and return the number.

def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""

def lex_id(line):
    id = ""

We will then fill out the lex_str() function, doing the same thing as the digit one but for a string instead. It skips the opening quote and stops at the matching closing quote, so the consumed count includes both quote characters.

def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""
    for c in line[1:]:
        if c == delimiter:
            break
        string += c
    return 'str', string, len(string) + 2

def lex_id(line):
    id = ""

And now we will fill out the lex_id() function!

def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""
    for c in line[1:]:
        if c == delimiter:
            break
        string += c
    return 'str', string, len(string) + 2

def lex_id(line):
    id = ""
    for c in line:
        if not c.isdigit() and not c.isalpha() and c != "_":
            break
        id += c
    return 'ID', id, len(id)

What About KeyWords?
Yes, we will need to change the lex_id() function to know about key words... What are you waiting for, read on! We are going to make a list of keywords and check the id against it.

def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""
    for c in line[1:]:
        if c == delimiter:
            break
        string += c
    return 'str', string, len(string) + 2

def lex_id(line):
    keys = ['print', 'var', 'while', 'if', 'elif', 'else']
    id = ""
    for c in line:
        if not c.isdigit() and not c.isalpha() and c != "_":
            break
        id += c
    if id in keys:
        return 'key', id, len(id)
    else:
        return 'ID', id, len(id)

The Entire Code
I know you want to go out and try this, but if you need it, here is the full working code.
If you want to copy and paste, do it below, or on my better lexer.

def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""
    for c in line[1:]:
        if c == delimiter:
            break
        string += c
    return 'str', string, len(string) + 2

def lex_id(line):
    keys = ['print', 'var', 'while', 'if', 'elif', 'else']
    id = ""
    for c in line:
        if not c.isdigit() and not c.isalpha() and c != "_":
            break
        id += c
    if id in keys:
        return 'key', id, len(id)
    else:
        return 'ID', id, len(id)

def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        if lexeme.isdigit():
            typ, tok, consumed = lex_num(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme == '"' or lexeme == "'":
            typ, tok, consumed = lex_str(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme.isalpha():
            typ, tok, consumed = lex_id(line[lexeme_count:])
            lexeme_count += consumed
        else:
            lexeme_count += 1

code = input()
lex(code)
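One last note from me: as written, lex() computes typ and tok but never keeps them, so nothing like the (type, token) list from the intro is ever produced. A small sketch of a tweak that collects and returns the tokens (my addition, not part of the original tutorial):

def lex(line):
    tokens = []
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        if lexeme.isdigit():
            typ, tok, consumed = lex_num(line[lexeme_count:])
        elif lexeme == '"' or lexeme == "'":
            typ, tok, consumed = lex_str(line[lexeme_count:])
        elif lexeme.isalpha():
            typ, tok, consumed = lex_id(line[lexeme_count:])
        else:
            lexeme_count += 1
            continue
        tokens.append((typ, tok))   # keep the (type, token) pair
        lexeme_count += consumed
    return tokens

print(lex(input()))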
Making your own programming language with Python

Why make your own language?
When you write your own programming language, you control the entire programmer experience. This lets you shape exactly how each aspect of your language works and how a developer interacts with it. You can make a language with the things you like from other languages and none of the stuff you don't. In addition, learning about programming language internals can help you better understand the internals of the programming languages you use every day, which can make you a better programmer.

How programming languages work
Every programming language is different in the way it runs, but many consist of a couple of fundamental steps: lexing and parsing.

Introduction to Lexing
Lexing is short for LEXical analysis. The lex step is where the language takes the raw code you've written and converts it into an easily parsable structure. This step interprets the syntax of your language and turns text into special symbols inside the language called tokens. For example, let's say you have some code you want to parse. To keep it simple I'll use Python-like syntax, but it could be anything. It doesn't even have to be text.

# this is a comment
a = (1 + 1)

A lexer to parse this code might do the following:
Discard all comments
Produce a token that represents a variable name
Produce left and right parenthesis tokens
Convert literals like numbers or strings to tokens
Produce tokens for math operations like + - * / (and maybe bitwise/logical operators as well)

The lexer will take the raw code and interpret it into a list of tokens. The lexer can also be used to ensure that two pieces of code that may look different, like 1 + 1 and 1+1, are still parsed the same way. For the code above, it might generate tokens like this:

NAME(a) EQUALS LPAREN NUMBER(1) PLUS NUMBER(1) RPAREN

Tokens can come in many forms, but the main idea here is that they are a standard and easy-to-parse way of representing the code.

Introduction to Parsing
The parser is the next step in the running of your language. Now that the lexer has turned the text into consistent tokens, the parser simplifies and executes them. Parser rules recognize a sequence of tokens and do something about them. Let's look at a couple of examples. A simple parser could just say: if I see the GREET token and then a NAME token, print Hello, and then the name. A more complicated parser aiming to parse the code above might have these rules, which we will explore later:

Try to classify as much code as possible as an expression. By "as much code as possible" I mean the parser will first try to consider a full mathematical operation as an expression, and then, if that fails, convert a single variable or number to an expression. This ensures that as much code as possible will be matched as an expression. The "expression" concept allows us to catch many patterns of tokens with one piece of code. We will use the expression in the next step.
Now that we have a concept of an expression, we can tell the parser that if it sees the tokens NAME EQUALS and then an expression, that means a variable is being assigned.

Using PLY to write your language

What is PLY?
Now that we know the basics of lexing and parsing, let's start writing some Python code to do it. PLY stands for Python Lex Yacc. It is a library you can use to make your own programming language with Python. Lex is a well-known library for writing lexers.
Yacc stands for "Yet Another Compiler Compiler": it is a tool that compiles descriptions of new languages into compilers, i.e. its output is itself a compiler. This tutorial is a short example, but the PLY documentation is an amazing resource with tons of examples. I would highly recommend that you check it out if you are using PLY. For this example, we are going to be building a simple calculator with variables. If you want to see the fully completed example, you can fork this repl: [TODO!!]

Lexing with PLY lex

Lexer tokens

Let's start our example! Fire up a new python repl and follow along with the code samples. To start off, we need to import PLY:

from ply import lex, yacc

Now let's define our first token. PLY requires you to have a tokens list which contains every token the lexer can produce. Let's define our first token, PLUS for the plus sign:

tokens = [
    'PLUS',
]

t_PLUS = r'\+'

A string that looks like r'' is special in python. The r prefix means "raw", which makes backslashes part of the string instead of escape characters. For example, to define the string \+ in python, you could either do '\\+' or r'\+'. We are going to be using a lot of backslashes, so raw strings make things a lot easier. But what does \+ mean? Well in the lexer, tokens are mainly parsed using regexes. A regex is like a special programming language specifically for matching patterns in text. A great resource for regexes is regex101.com where you can test your regexes with syntax highlighting and see explanations of each part. I'm going to explain the regexes included in this tutorial, but if you want to learn more you can play around with regex101 or read one of the many good regex tutorials on the internet. The regex \+ means "match a single character +". We have to put a backslash before it because + normally has a special meaning in regex, so we have to "escape" it to show we want to match a + literally.

We are also required to define a function that runs when the lexer encounters an error:

def t_error(t):
    print(f"Illegal character {t.value[0]!r}")
    t.lexer.skip(1)

This function just prints out a warning when it hits a character it doesn't recognize and then skips it (the !r means repr, so it will print out quotes around the character). You can change this to be whatever you want in your language though.

Optionally, you can define a newline token which isn't produced in the output of the lexer, but keeps track of each line.

def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

Since this token is a function, we can define the regex in the docstring of the function instead. The function takes a parameter t, which is a special object representing the match that the lexer found. We can access the lexer using the t.lexer attribute. This function matches at least one newline character and then increases the line number by the amount that it sees. This allows the lexer to know what line number it's on at all times using the lexer.lineno variable. Now we can use the line number in our error function:

def t_error(t):
    print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
    t.lexer.skip(1)

Let's test out the lexer! This is just some temporary code; you don't have to know what this code does, because once we implement a parser, the parser will run the lexer for you.

lexer = lex.lex()
lexer.input('+')
for token in lexer:
    print(token)

Play around with the value passed to lexer.input. You should notice that any character other than a plus sign makes the error message print out, but doesn't crash the program.
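If everything is wired up correctly, running the test code above should print one token for the plus sign, something like the following (PLY tokens show their type, value, line number, and position; the exact formatting may vary slightly between PLY versions):

LexToken(PLUS,'+',1,0)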
In your language, you can make it gracefully ignore lex errors like this or make it stop running by editing the t_error function. If you add more lines to the input string, the line number in the error message should change.

More complicated tokens

Let's delete the test token and add some more complicated tokens. Replace your tokens list and the t_PLUS line with the following code (note that NAME has to be declared in the tokens list so PLY accepts the rule below):

reserved_tokens = {
    'greet': 'GREET'
}

tokens = list(reserved_tokens.values()) + [
    'SPACE',
    'NAME',
]

t_SPACE = r'[ ]'

def t_NAME(t):
    r'[a-zA-Z_][a-zA-Z0-9_]*'
    if t.value in reserved_tokens:
        t.type = reserved_tokens[t.value]
    return t

Let's explore the regex we have in the t_NAME function. This regex is more complicated than the simple ones we've used before. First, we have [a-zA-Z_]. This is a character class in regex. It means: match any lowercase letter, uppercase letter, or underscore. Next we have [a-zA-Z0-9_]. This is the same as above except numbers are also included. Finally, we have *. This means "repeat the previous group or class zero to unlimited times". Why do we structure the regex like this? Having two separate classes makes sure that the first one must match for it to be a valid variable. Excluding numbers from the first class means a plain number won't match, and it also makes sure you can't start a variable name with a number. You can still have numbers in the variable name, because they are matched by the second class of the regex.

In the code, we first have a dictionary of reserved names. This is a mapping of patterns to the token type that they should be. The only one we have says that greet should be mapped to the GREET token. The code that sets up the tokens list takes all of the possible reserved token values, in this example it's just ['GREET'], and adds on ['SPACE', 'NAME'], giving us ['GREET', 'SPACE', 'NAME'] automatically! But why do we have to do this? Couldn't we just use something like the following code?

# Don't use this code! It doesn't work!
t_GREET = r'greet'
t_SPACE = r'[ ]'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'

Actually, if we used that code, greet would never be matched! The lexer would match it with the NAME token. In order to avoid this, we define a new type of token which is a function. This function has the regex as its docstring and is passed a t parameter. This parameter has a value attribute which is the pattern matched. The code inside this function simply checks if this value is one of the special reserved names we defined before. If it is, we set the special type attribute of the t parameter. This type controls the type of token which is produced from the pattern. When it sees the name greet, it will see greet is in the reserved names dictionary and produce a token type of GREET because that is the corresponding value in the dictionary. Otherwise, it is left as a NAME token (the default type, taken from the rule's name) because this is a regular variable. This allows you to add more reserved terms easily later; it's as simple as adding a value to the dictionary. If needed, you could also make the key of the reserved names dictionary a regex and then match each regex against t.value in the function. If you want to change these rules for your language, feel free!

Parsing with PLY yacc

Fair warning: yacc can sometimes be hard to use and debug, even if you know python well. Keep in mind, you don't have to use both lex and yacc; if you want, you can just use lex and then write your own code to parse the tokens. With that said, let's get started.
Yacc basics

Before we get started, delete the lexer testing code (everything from lexer.input onward). When we run the parser, the lexer is automatically run. Let's add our first parser rule!

def p_hello(t):
    'statement : GREET SPACE NAME'
    print(list(t))
    print(f"Hello, {t[3]}")

Let's break this down. Again, we have information on the rule in the docstring. This information is called a BNF Grammar. A statement in BNF Grammar consists of a grammar rule known as a non-terminal, and terminals. In the example above, statement is the non-terminal and GREET SPACE NAME are terminals. The left-hand side describes what is produced by the rule, and the right-hand side describes what matches the rule. The right hand side can also have non-terminals in it; just be careful to avoid infinite loops. Basically, the yacc parser works by pushing tokens onto a stack, and looking at the current stack and the next token and seeing if they match any rules that it can use to simplify them. Here is a more in-depth explanation and example. Before the above example can run, we still have to add some more code. Just like for the lexer, the error handler is required:

def p_error(t):
    if t is None:  # lexer error, already handled
        return
    print(f"Syntax Error: {t.value!r}")

Now let's create and run the parser:

parser = yacc.yacc()
parser.parse('greet replit')

If you run this code you should see:

[None, 'greet', ' ', 'replit']
Hello, replit

The first line is the list version of the object passed to the parser function. The first value is the statement that will be produced from the function, so it is None. Next, we have the values of the tokens we specified in the rule. This is where the t[3] part comes from. This is the third item in the list, which is the NAME token, so our parser prints out Hello, replit!

Note: Creating the parser tables is a relatively expensive operation, so the parser creates a file called parsetab.py which it can load the parse tables from if they haven't changed. You can change this filename by passing a kwarg into the yacc initialization, like parser = yacc.yacc(tabmodule='fooparsetab')

More complicated parsing: Calculator

This example is different from our running example, so I will just show a full code example and explain it.

from ply import lex, yacc

tokens = (
    'NUMBER',
    'PLUS',
    'MINUS',
    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)

t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'

def t_NUMBER(t):
    r'\d+'
    try:
        t.value = int(t.value)
    except ValueError:
        print(f"Integer value too large: {t.value}")
        t.value = 0
    return t

def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

def t_error(t):
    print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
    t.lexer.skip(1)

t_ignore = ' \t'

lexer = lex.lex()

# Parsing

# Without a precedence table the binop grammar below is ambiguous and
# yacc will report shift/reduce conflicts, so we declare the usual
# operator precedence and left-associativity.
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+':
        t[0] = t[1] + t[3]
    elif t[2] == '-':
        t[0] = t[1] - t[3]
    elif t[2] == '*':
        t[0] = t[1] * t[3]
    elif t[2] == '/':
        t[0] = t[1] / t[3]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_error(t):
    if t is None:  # lexer error
        return
    print(f"Syntax Error: {t.value!r}")

parser = yacc.yacc()

if __name__ == "__main__":
    while True:
        inp = input("> ")
        print(parser.parse(inp))

First we start off with the tokens: numbers, mathematical operations, and parentheses.
You might notice that I didn't use the reserved_tokens trick, but you can implement it if you want. Next we have a simple number token which matches 0-9 with \d+ and then converts its value from a string to an integer. The next code we haven't used before is t_ignore. This variable holds all the characters the lexer should ignore, which is ' \t', meaning spaces and tabs. When the lexer sees these, it will just skip them. This allows users to add spaces without it affecting the lexer. The precedence table tells yacc that * and / bind more tightly than + and -, and that all four operators are left-associative; without it the binop rule below would be ambiguous.

Now we have 3 parser directives. The first is a large one, producing an expression from 4 possible input values, one for each math operation. Each input has an expression on either side of the math operator. Inside this directive, we have some (pretty ugly) code that performs the correct operation based on the operation token given. If you want to make this prettier, consider a dictionary using the python stdlib operator module (a sketch follows at the end of this post). Next, we define an expression with parentheses around it as being the same as the expression inside. This substitutes the inner expression's value in for the group, making the inside evaluate first. With very little code we created a very complicated rule that can deal with nested parentheses correctly. Finally, we define a number as being able to be an expression, which allows a number to be used as one of the expressions in rule 1.

For a challenge, try adding variables into this calculator! You should be able to set variables by using syntax like varname = any_expression and you should be able to use variables in expressions. If you're stuck, see one solution from the PLY docs.

That's it! Thanks for reading! If you have questions, feel free to ask on the Replit discord's #help-and-reviews channel, or just the comments. Have fun!
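As promised, here is a minimal sketch of the binop rule rewritten with a dictionary of functions from the stdlib operator module instead of the if/elif chain (a drop-in replacement for the p_expression_binop above, not part of the original repl):

import operator

BINOPS = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.truediv,
}

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    # look the operator up by its token value instead of branching on it
    t[0] = BINOPS[t[2]](t[1], t[3])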
Learn To Code In Python

Teaches you how to code in python. By PYer

This tutorial expects some basic knowledge of coding in another language.

What is python?

Python is a very popular coding language, used for everything from little scripts to serious projects, and it is very useful to learn. It was created in 1991 by Guido van Rossum. Look at a few uses of python:

Desktop Applications
Web Applications
Complex Scientific Equations

Let's look at a few reasons why it is useful:

Readable/Understandable Code
Compatible with other systems/platforms
Millions of useful modules

These are just a few; you can find a bunch more by researching it.

Know This Before We Start

What we will be teaching you is specifically python 3. This is the most updated version, but version 2 is still widely used. Here we will be using replit, but there are multiple text editors you can find.

Emacs
Komodo Edit
Vim
Sublime Text

More at Python Text Editors

Python Syntax

Python syntax was made for readability and easy editing. For example, the python language uses a : and indented code, while javascript and others generally use {} and indented code.

First Program

Let's create a python 3 repl, and call it Hello World. Now you have a blank file called main.py. Now let us write our first line of code:

helloworld.py
print('Hello world!')

Brian Kernighan actually wrote the first "Hello, World!" program as part of the documentation for the BCPL programming language developed by Martin Richards. Now, press the run button, which obviously runs the code. If you are not using replit, this will not work. You should research how to run a file with your text editor.

Command Line

If you look to your left at the console where hello world was just printed, you can see a >, >>>, or $ depending on what you are using. After the prompt, try typing a line of code.

Python 3.6.1 (default, Jun 21 2017, 18:48:35)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
> print('Testing command line')
Testing command line
> print('Are you sure this works?')
Are you sure this works?
>

The command line allows you to execute single lines of code at a time. It is often used when trying out a new function or method in the language.

New: Comments!

Another cool thing that you can generally do with all languages are comments. In python, a comment starts with a #. The computer ignores all text starting after the #.

shortcom.py
# Write some comments!

If you have a huge comment, do not comment all the 350 lines; just put ''' before it, and ''' at the end. Technically, this is not a comment but a string, but the computer still ignores it, so we will use it.

longcom.py
'''
Dear PYer,
I am confused about how you said you could use triple quotes to make SUPER LONG COMMENTS! I am wondering if this is true, and if so, I am wondering if this is correct. Could you help me with this?
Thanks,
Random guy who used your tutorial.
'''
print('Testing')

New: Variables!

Unlike many other languages, there is no var, let, or const to declare a variable in python. You simply go name = 'value'.

vars1.py
x = 5
y = 7
z = x*y # 35
print(z) # => 35

Remember, there is a difference between integers and strings. Remember: String = "". To convert between these two, you can put an int in a str() function, and a string in an int() function. There is also a less used one, called a float. Mainly, these are numbers with decimals. Change them using the float() function.
vars2.py
x = 5
x = str(x)
b = '5'
b = int(b)
print('x = ', x, '; b = ', str(b), ';') # => x = 5; b = 5;

Instead of using the , in the print function, you can put a + to combine the variables and string.

Operators

There are many operators in python:

+ - / *

These operators are the same in most languages, and allow for addition, subtraction, division, and multiplication. Now, we can look at a few more complicated ones:

% // ** += -= /= *=

Research these if you want to find out more...

simpleops.py
x = 4
a = x + 1
a = x - 1
a = x * 2
a = x / 2

You should already know everything shown above, as it is similar to other languages. If you continue down, you will see more complicated ones.

complexop.py
a += 1
a -= 1
a *= 2
a /= 2

The ones above are to edit the current value of the variable. Sorry to JS users, as there is no i++; or anything.

Fun Fact: The python language was named after Monty Python. If you really want to know about the others, view Py Operators

More Things With Strings

Like the title? Anyways, a ' and a " both indicate a string, but do not combine them!

quotes.py
x = 'hello' # Good
x = "hello" # Good
x = "hello' # ERRORRR!!!

String Slicing

You can look at only certain parts of the string by slicing it, using [num:num]. The first number stands for how far in you go from the front, and the second stands for how far in you go from the back.

slicing.py
x = 'Hello everybody!'
x[1] # 'e'
x[-1] # '!'
x[5] # ' '
x[1:] # 'ello everybody!'
x[:-1] # 'Hello everybody'
x[2:-3] # 'llo everybo'

Methods and Functions

Here is a list of functions/methods we will go over:

.strip()
len()
.lower()
.upper()
.replace()
.split()

I will make you try these out yourself. See if you can figure out how they work.

strings.py
x = " Testing, testing, testing, testing "
print(x.strip())
print(len(x))
print(x.lower())
print(x.upper())
print(x.replace('test', 'runn'))
print(x.split(','))

Good luck, see you when you come back!

New: Input()

Input is a function that gathers input entered from the user in the command line. It takes one optional parameter, which is the user's prompt.

inp.py
print('Type something: ')
x = input()
print('Here is what you said: ', x)

If you wanted to make it smaller, and look neater to the user, you could do...

inp2.py
print('Here is what you said: ', input('Type something: '))

Running:

inp.py
Type something:
Hello World
Here is what you said: Hello World

inp2.py
Type something: Hello World
Here is what you said: Hello World

New: Importing Modules

Python has created a lot of functions that are located in other .py files. You need to import these modules to gain access to them. You may wonder why python did this. The purpose of separate modules is to make python faster. Instead of storing millions and millions of functions, it only needs a few basic ones. To import a module, you must write import <modulename>. Do not add the .py extension to the file name. In this example, we will be using a module that comes with python, named random.

module.py
import random

Now, I have access to all functions in the random.py file. To access a specific function in the module, you would do <module>.<function>. For example:

module2.py
import random
print(random.randint(3,5)) # Prints a random number between 3 and 5

Pro Tip: Do from random import randint to not have to do random.randint(), just randint()

To import all functions from a module, you could do from random import *

New: Loops!

Loops allow you to repeat code over and over again. This is useful if you want to print Hi with a delay of one second 100 times.
for Loop

The for loop goes through a list of values, setting a variable equal to one item of the list each time. Let's say we wanted to create something like the example above.

loop.py
from time import sleep
for i in range(100):
    print('Hello')
    sleep(.3)

This will print Hello with a .3 second delay 100 times. This is just one way to use it, but it is usually used like this:

loop2.py
import time
for number in range(100):
    print(number)
    time.sleep(.1)

while Loop

The while loop runs the code while something stays true. You would put while <expression>. Every time the loop runs, it evaluates if the expression is True. If it is, it runs the code; if not, it continues outside of the loop. For example:

while.py
while True: # Runs forever
    print('Hello World!')

Or you could do:

while2.py
import random
position = '<placeholder>'
while position != 1: # will run at least once
    position = random.randint(1, 10)
    print(position)

New: if Statement

The if statement allows you to check if something is True. If so, it runs the code; if not, it continues on. It is kind of like a while loop, but it executes only once. An if statement is written:

if.py
import random
num = random.randint(1, 10)
if num == 3:
    print('num is 3. Hooray!!!')
if num > 5:
    print('Num is greater than 5')
if num == 12:
    print('Num is 12, which means that there is a problem with the python language, see if you can figure it out. Extra credit if you can figure it out!')

Now, you may think that it would be better if you could make it print only one message, not one for each condition that happens to be True. You can do that with an elif statement:

elif.py
import random
num = random.randint(1, 10)
if num == 3:
    print('Num is three, this is the only msg you will see.')
elif num > 2:
    print('Num is not three, but is greater than 2')

Now, you may wonder how to run code if none work. Well, there is a simple statement called else:

else.py
import random
num = random.randint(1, 10)
if num == 3:
    print('Num is three, this is the only msg you will see.')
elif num > 2:
    print('Num is not three, but is greater than 2')
else:
    print('No category')

New: Functions (def)

So far, you have only seen how to use functions other people have made. Let's use this example: you want to print a random number between 1 and 9, and print different text every time. It is quite tiring to type:

Characters: 389
nofunc.py
import random
print(random.randint(1, 9))
print('Wow that was interesting.')
print(random.randint(1, 9))
print('Look at the number above ^')
print(random.randint(1, 9))
print('All of these have been interesting numbers.')
print(random.randint(1, 9))
print("these random.randint's are getting annoying to type")
print(random.randint(1, 9))
print('Hi')
print(random.randint(1, 9))
print('j')

Now with functions, you can seriously lower the amount of characters:

Characters: 254
functions.py
import random
def r(t):
    print(random.randint(1, 9))
    print(t)
r('Wow that was interesting.')
r('Look at the number above ^')
r('All of these have been interesting numbers.')
r("these random.randint's are getting annoying to type")
r('Hi')
r('j')

There you go! Try making your own functions!

The End

Now you know all of the basics of python. Congratulations! Please upvote. Thanks!
[1] Python Made EZ! 🐍

Hîïíīįì everyone! Hope y'all are doing great! School is starting real soon, so I hope you have been studying to get ready, or that you are enjoying the last of vacation! So I made this tutorial on python so that others can try to learn from it and get better! Hopefully, what I say will be comprehensive and easy to read. Most of it I will write, but sometimes I will include some stuff from other websites which explain better than me. I will put what I've taken in italic, and the sources and helpful links at the bottom. By the way, this is the first of the language tutorials I'm making!

I will be covering:

Hello World!: History of Python
Key Terms
Comments
print
Data Types
Variables - Printing Variables - Naming Variables - Changing Variables
Concatenation
Operators
Comparison Operators
Conditionals - if - elif - else
input
A Bit of Lists
for Loops
while Loops
Functions
Imports - time - random - math
Small Programs and Useful Stuff
ANSI Escape Codes
Links
Goodbye World!: End

Well without any further ado, let's get on with it!

Hello World!: History of Python

Python is a general purpose programming language. It was created by Guido Van Rossum and released in 1991. One of the main features of it is its readability, simple syntax, and few keywords, which makes it great for beginners (with no prior experience of coding) to learn it. Fun fact: Guido Van Rossum was reading the scripts of Monty Python when he was creating the language; he needed "a name that was short, unique, and slightly mysterious" so he decided to call the language Python. (Last year we had to make a poem on an important person in Computer Science, so I made one on him: https://docs.google.com/document/d/1yf2T2fFaS3Vwk7zkvN1nPOr8XPXJroL1yHI7z5qhaRc/edit?usp=sharing)

Key Terms

Now before we continue, just a few words you should know:

Console: the black part located at the right/bottom of your screen
Input: stuff that is taken in by the computer (more on this later)
Output: the information processed and sent out by the computer (usually in the console)
Errors: actually, a good thing! Don't worry if you have an error, just try to learn from it and correct it. That's how you can improve, by knowing how to correct errors.
Execute: run a piece of code

Comments

Comments are used for explaining your code, making it more readable, and to prevent execution when testing code. This is how to comment:

# this is a comment
# it starts with a hashtag #
# Python will ignore and not run anything after the hashtag

You can also have multiline comments:

"""
this is a multiline comment
I can make it very long!
"""

print

The print() function is used for outputting a message (object) onto the console. This is how you use it:

print("Something.") # remember this is a comment
# you can use double quotes "
# or single quotes '
print('Using single quotes')
print("Is the same as using double quotes")

You can also use triple quotes for big messages. Example:

print("Hello World!")
print("""
Rules:
[1] Code
[2] Be nice
[3] Lol
[4] Repeat
""")

Output:

Hello World!
Rules:
[1] Code
[2] Be nice
[3] Lol
[4] Repeat

Data Types

Data types are the classification or categorization of data items. These are the 4 main data types:

int: (integer) a whole number; 12 is an int, so is 902.
str: (string) a sequence of characters; "Hi" is a str, so is "New York City".
float: (float) a decimal; -90.0 is a float, so is 128.84
bool: (boolean) data type with 2 possible values: True and False

Note that True has a capital T and False has a capital F!
Variables

Variables are used for containing/storing information. Example:

name = "Lucy" # this variable contains a str
age = 25 # this variable contains an int
height = 160.5 # this variable contains a float
can_vote = True # this variable contains a Boolean that is True (because Lucy is 25 y/o)

Printing Variables:

To print variables, you simply do print(variableName):

print(name)
print(age)
print(height)
print(can_vote)

Output:

Lucy
25
160.5
True

Naming Variables:

You should try to make variables with a descriptive name. For example, if you have a variable with an age, an appropriate name would be age, not how_old or number_years. Some rules for naming variables:

must start with a letter (not a number)
no spaces (use underscores)
no keywords (like print, input, or, etc.)

Changing Variables:

You can change variables to other values. For example:

x = 18
print(x)
x = 19
print(x)
# the output will be:
# 18
# 19

As you can see, we have changed the variable x from the initial value of 18 to 19.

Concatenation

Let's go back to our first 3 variables:

name = "Lucy"
age = 25
height = 160.5

What if we want to make a sentence like this: Her name is Lucy, she is 25 years old and she measures 160.5 cm.

Of course, we could just print that whole thing like this: print("Her name is Lucy, she is 25 years old and she measures 160.5 cm.")

But if we want to do this with variables, we could do it something like this:

print("Her name is " + name + ", she is " + age + " years old and she measures " + height + " cm.")
# try running this!

Aha! If you ran it, you should have gotten an error (a TypeError saying you can only concatenate str, not int, to str). Basically, it means that you cannot concatenate int to str. But what does concatenate mean?

Concatenate means join/link together, like the concatenation of "sand" and "castle" is "sandcastle"

In the previous code, we want to concatenate the bits of sentences ("Her name is ", ", she is", etc.) as well as the variables (name, age, and height). Since the computer can only concatenate str together, we simply have to convert those variables into str, like so:

print("Her name is " + name + ", she is " + str(age) + " years old and she measures " + str(height) + " cm.")
# since name is already a str, no need to convert it

Output:

Her name is Lucy, she is 25 years old and she measures 160.5 cm.

Operators

A symbol or function denoting an operation

Basically operators can be used in math. List of operators:

+ For adding numbers (can also be used for concatenation) | Eg: 12 + 89 = 101
- For subtracting numbers | Eg: 65 - 5 = 60
* For multiplying numbers | Eg: 12 * 4 = 48
/ For dividing numbers | Eg: 60 / 5 = 12
** Exponentiation ("to the power of") | Eg: 2**3 = 8
// Floor division (divides numbers and takes away everything after the decimal point) | Eg: 100 // 3 = 33
% Modulo (divides numbers and returns what's left, the remainder) | Eg: 50 % 30 = 20

These operators can be used for decreasing/increasing variables. Example:

x = 12
x += 3
print(x)
# this will output 15, because 12 + 3 = 15

You can replace the + in += by any other operator that you want:

x = 6
x *= 5
print(x)
y = 9
y /= 3
print(y)
# this will output 30 and then below it 3.0

Also: x += y is just a shorter version of writing x = x + y; both work the same

Comparison Operators

Comparison operators are for, well, comparing things. They return a Boolean value, True or False. They can be used in conditionals.
List of comparison operators:

== equal to | Eg: 7 == 7
!= not equal to | Eg: 7 != 8
> bigger than | Eg: 12 > 8
< smaller than | Eg: 7 < 9
>= bigger than or equal to | Eg: 19 >= 19
<= smaller than or equal to | Eg: 1 <= 4

If we type these into the console, we will get either True or False:

6 > 7 # will return False
12 < 80 # will return True
786 != 787 # will return True
95 <= 96 # will return True

Conditionals

Conditionals are used to verify if an expression is True or False.

if

Example: we want to see if a number is bigger than another one.

How to say it in english: "If the number 10 is bigger than the number 5, then etc."

How to say it in Python:

if 10 > 5:
    # etc.

All the code that is indented will be inside that if statement. It will only run if the condition is verified. You can also use variables in conditionals:

x = 20
y = 40
if x < y:
    print("20 is smaller than 40!")
# the output of this program will be "20 is smaller than 40!" because the condition (x < y) is True.

elif

elif is basically like if; it checks if several conditions are True. Example:

age = 16
if age == 12:
    print("You're 12 years old!")
elif age == 14:
    print("You're 14 years old!")
elif age == 16:
    print("You're 16 years old!")

This program will output:

You're 16 years old!

Because age = 16.

else

else usually comes after the if/elif. Like the name implies, the code inside it only executes if the previous conditions are False. Example:

age = 12
if age >= 18:
    print("You can vote!")
else:
    print("You can't vote yet!")

Output:

You can't vote yet!

Because age < 18.

input

The input function is used to prompt the user. It will stop the program until the user types something and presses the return key. You can assign the input to a variable to store what the user types. For example:

username = input("Enter your username: ")
# then you can print the username
print("Welcome, "+str(username)+"!")

Output:

Enter your username: Bookie0
Welcome, Bookie0!

By default, the input converts what the user writes into str, but you can specify it like this:

number = int(input("Enter a number: "))
# converts what the user says into an int
# if the user types a str or float, then there will be an error message.
# doing int(input()) is useful for calculations, now we can do this:
number += 10
print("If you add 10 to that number, you get: "+ str(number))
# remember to convert it to str for concatenation!

Output:

Enter a number: 189
If you add 10 to that number, you get: 199

You can also do float(input("")) to convert it to float.

Now, here is a little program summarizing a bit of what you've learnt so far.

Full program:

username = input("Username: ")
password = input("Password: ")
admin_username = "Mr.ADMIN"
admin_password = "[email protected]"
if username == admin_username:
    if password == admin_password:
        print("Welcome Admin! You are the best!")
    else:
        print("Error! Wrong password!")
else:
    print("Welcome, general user "+str(username)+"!")

Now a detailed version:

# inputs
username = input("Username: ") # asks user for the username
password = input("Password: ") # asks user for the password

# variables
admin_username = "Mr.ADMIN" # setting the admin username
admin_password = "[email protected]" # setting the admin password

# conditionals
if username == admin_username: # if the user entered the exact admin username
    if password == admin_password: # if the user enters the exact and correct admin password
        print("Welcome Admin! You are the best!") # a welcome message only to the admin
    else: # if the user gets the admin password wrong
        print("Error! Wrong password!") # an error message appears
Wrong password!") # an error message appears else: # if the user enters something different than the admin username print("Welcome, general user "+str(username)+"!") # a welcome message only for general users Output: An option: Username: Mr.ADMINPassword: i dont knowError! Wrong password! Another option: Username: Mr.ADMINPassword: [email protected]Welcome Admin! You are the best! Final option: Username: BobPassword: Chee$eWelcome, general user Bob! A bit of lists A list is a collection which is ordered and changeable. They are written with square braquets: [] meat = ["beef", "lamb", "chicken"] print(meat) Output: ['beef', 'lamb', 'chicken'] You can access specific items of the list with the index number. Now here is the kinda tricky part. Indexes start at 0, meaning that the first item of the list has an index of 0, the second item has an index of 1, the third item has an index of 2, etc. meat = ["beef", "lamb", "chicken"] # Index: 0 1 2 etc. print(meat[2]) # will output "chicken" because it is at index 2 You can also use negative indexing: index -1 means the last item, index -2 means the second to last item, etc. meat = ["beef", "lamb", "chicken"] # Index: -3 -2 -1 etc. print(meat[-3]) # will output "beef" because it is at index -3 You can add items in the list using append(): meat = ["beef", "lamb", "chicken"] meat.append("pork") print(meat) Output: ['beef', 'lamb', 'chicken', 'pork'] "pork" will be added at the end of the list. For removing items in the list, use remove(): meat = ['beef', 'lamb', 'chicken'] meat.remove("lamb") print(meat) Output: ['beef', 'chicken'] You can also use del to remove items at a specific index: meat = ['beef', 'lamb', 'chicken'] del meat[0] print(meat) Output: ['lamb', 'chicken'] There are also many other things you can do with lists, check out this: https://www.w3schools.com/python/python_lists.asp for more info! for loops A for loop is used for iterating over a sequence. Basically, it runs a piece of code for a specific number of times. For example: for i in range(5): print("Hello!") Output: Hello!Hello!Hello!Hello!Hello! You can also use the for loop to print each item in a list (using the list from above): meat = ['beef', 'lamb', 'chicken'] for i in meat: print(i) Output: beeflambchicken while loops while loops will run a piece of code as long as the condition is True. For example: x = 1 # sets x to 1 while x <= 10: # will repeat 10 times print(x) # prints x x += 1 # increments (adds 1) to x Ouput: 12345678910 You can also make while loops go on for infinity, like so (useful for spamming lol): while True: print("Theres no stopping me nowwwww!") Output: Theres no stopping me nowwwww!Theres no stopping me nowwwww!Theres no stopping me nowwwww!Theres no stopping me nowwwww!Theres no stopping me nowwwww!# etc until infinity Functions Functions are a group of code that will only execute when it is called. For example, instead having to type a piece of code several times, you can use a function to put that piece of code inside, and then when you need to use it, you can just call it. def greeting(): # defining the function print("Bonjour!") # everything that is indented will be executed when the function is called greeting() # calling the function # you can now call this function when you want, instead of always writing the same code everytime Output: Bonjour! return and arguments The return statement is used in function. It ends the function and "returns" the result, i.e. the value of the expression following the return keyword, to the caller. 
It is not mandatory; you don't have to use it. You can also have arguments inside a function. These allow you to change the function's values. The arguments are in the parentheses. For example:

def sum(x, y): # x and y are the arguments
    total = x + y
    return total # "assigns" x + y to the function

result = sum(4, 5) # you can change those to what you want
print(result)
# this will output 9, because 4+5 = 9

Imports

time

You can use time in your Python programs. How to make the program wait:

# first import time
import time

print("Hello!")
# then for the program to wait
time.sleep(1) # write how long you want to wait (in seconds) in the parentheses
print("Bye!")

Output:

Hello!
# program will wait 1 second
Bye!

You can also do this (simpler):

import time
from time import sleep
# instead of time.sleep(), do sleep()
# it's the same

print("time.sleep(1)...")
time.sleep(1)
print("...is the same as...")
sleep(1)
print("sleep(1)!")

random

You can use the random module to randomly pick numbers with randint():

# remember to import!
import random
from random import randint

rand_num = randint(1,5)
# this will give a random number between 1 and 5 inclusive!
# this means the possible numbers are 1, 2, 3, 4, or 5

The reason I am being precise about this is that you can also use randrange():

import random
from random import randrange

rand_num = randrange(1,5)
# this will give a random number between 1 inclusive and 5 NON-inclusive (or 4 inclusive)!
# this means the possible numbers are 1, 2, 3, or 4

You can also randomly pick an item from a list with choice():

import random
from random import choice

meat = ["beef", "lamb", "chicken"]
rand_meat = choice(meat)
print(rand_meat)
# this will output a randomly chosen item of the list meat
# the possible outcomes are beef, lamb, or chicken.

math

First, you already have some functions built into Python: min() and max(). They return the smallest and biggest of the values inside the parentheses, respectively. For example:

list_a = min(18, 12, 14, 16)
list_b = max(17, 19, 15, 13)
print(list_a) # will output 12
print(list_b) # will output 19

Now for some more modules: You can use math.floor() and math.ceil() to round numbers down or up to the nearest int. For example:

# first import
import math

num_a = math.floor(2.3)
num_b = math.ceil(2.3)
print(num_a) # will output 2
print(num_b) # will output 3

Explanation (from Andrew Sutherland's course): So math.floor() will round 2.3 down to the nearest lowest int, which in this case is 2. This is because, if you imagine it, the floor is on the bottom, so that's why it will round the number to the nearest lowest int. Vice-versa for math.ceil(); it will round 2.3 up to the nearest highest int, which in this case is 3. This is because ceil is short for ceiling (programmers like to shorten words), and the ceiling is high.

You can also get pi π:

import math

pi = math.pi
print(pi)

Output:

3.141592653589793

Here is the full list of all the things you can do with math: https://www.w3schools.com/python/module_math.asp

Small Programs You Can Use

Countdown Program:

# imports
import time
from time import sleep

def countdown(): # making a function for the countdown (so you can use it several times)
    count = int(input("Countdown from what? ")) # asks user where the countdown starts
")) # asks user how long the countdown while count >= 0: # will repeat until count = 0 print(count) # prints where the countdown is at count -= 1 # subtracts 1 from count sleep(1) # program waits 1 second before continuing print("End of countdown!") # message after the countdown countdown() # remember to call the function or nothing will happen Output: Countdown from what? 5543210End of countdown! Simple Calculator First way using eval() calculation = input("Type your calculation: ") # asks the user for a calculation. print("Answer to " + str(calculation) + ": " + str(eval(calculation))) # eval basically does the operation, like on a normal calculator. # however, if you write something different than a valid operaion, there will be an error. Or another way, using several conditionals, and you can only do "something" + "something" (but with the operators): def calculator(): # making a function to hold all the code for calculator while True: # loops forever so you can make several calculations without having to press run again first_num = int(input("Enter 1st number: ")) # asks user for 1st number second_num = int(input("Enter 2nd number: ")) # asks user for 2nd number operator = input("Select operator: + - * / ** // ") # asks user for operator if operator == "+": # addition answer = first_num + second_num print(answer) elif operator == "-": # subtraction answer = first_num - second_num print(answer) elif operator == "*": # multiplication answer = first_num * second_num print(answer) elif operator == "/": # division answer = first_num / second_num print(answer) elif operator == "**": # exponentiation ("to the power of") answer = first_num ** second_num print(answer) elif operator == "//": # floor division answer = first_num // second_num print(answer) else: # if user selects an invalid operator print("Invalid!") calculator() # calls the function But obviously that is pretty long and full of many if/elif. Some functions that are useful: "Press ENTER to continue" Prompt: def enter(): input("Press ENTER to continue! ") # this is useful for text based adventure games; when they finish reading some text, they can press ENTER and the next part will follow. # just call the function where you need it Spacing in between lines function: def space(): print() print() # same as pressing ENTER twice, this is useful to make your text a bit more airy, makes it less compact and block like. Slowprint: # first imports: import time, sys from time import sleep def sp(str): for letter in str: sys.stdout.write(letter) sys.stdout.flush() time.sleep(0.06) print() # to use it: sp("Hello there!") # this will output Hello There! one letter every 0.06 seconds, making it look like the typewriter effect. ANSI Escape Codes ANSI escape codes are for controlling text in the console. You can use it to make what is in the output nicer for the user. For example, you can use \n for a new line: name = input("Enter your name\n>>> ") Output: Enter your name>>> This makes it look nice, you can start typing on the little prompt arrows >>>. 
You can also use \t for tab:

print("Hello\tdude")

Output:

Hello	dude

\v for vertical tab:

print("Hello\vdude")

Output:

Hello
     dude

You can also have colors in python:

# the ANSI codes are stored in variables, making them easier to use
black = "\033[0;30m"
red = "\033[0;31m"
green = "\033[0;32m"
yellow = "\033[0;33m"
blue = "\033[0;34m"
magenta = "\033[0;35m"
cyan = "\033[0;36m"
white = "\033[0;37m"
bright_black = "\033[0;90m"
bright_red = "\033[0;91m"
bright_green = "\033[0;92m"
bright_yellow = "\033[0;93m"
bright_blue = "\033[0;94m"
bright_magenta = "\033[0;95m"
bright_cyan = "\033[0;96m"
bright_white = "\033[0;97m"

# to use them:
print(red+"Hello")
# you can also have multiple colors:
print(red+"Hel"+bright_blue+"lo")
# and you can even use it with the slowprint I mentioned earlier!

Output: (colored text in the console; it can't be shown here)

And you can have underline and italic:

reset = "\u001b[0m"
underline = "\033[4m"
italic = "\033[3m"

# to use it:
print(italic+"Hello "+reset+" there "+underline+"Mister!")
# the reset is for taking away all changes you've made to the text
# it makes the text go back to the default color and text decorations.

Output: (styled text in the console)

Links: Sources and Good Websites

Sources: Always good to use a bit of help from here and there!

W3 Schools: https://www.w3schools.com/python/default.asp
Wikipedia: https://en.wikipedia.org/wiki/Guido_van_Rossum
Wikipedia: https://en.wikipedia.org/wiki/ANSI_escape_code
https://www.python-course.eu/python3_functions.php#:~:text=A%20return%20statement%20ends%20the,special%20value%20None%20is%20returned.

Good Websites you can use:

Official website: https://www.python.org/
W3 Schools: https://www.w3schools.com/python/default.asp
https://www.tutorialspoint.com/python/index.htm
https://realpython.com/

Goodbye World!: End

Well, I guess this is the end. I hope y'all have learnt something new/interesting! If you have any questions, please comment and I will try my best to answer them. Have a super day everyone!

PS: 6 FEET APART!!!
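As promised back in the calculator section, here is a minimal sketch of the same calculator using a dictionary of operator functions instead of the long if/elif chain (just an illustration, using the stdlib operator module):

import operator

ops = {
    "+": operator.add,        # addition
    "-": operator.sub,        # subtraction
    "*": operator.mul,        # multiplication
    "/": operator.truediv,    # division
    "**": operator.pow,       # exponentiation
    "//": operator.floordiv,  # floor division
}

def calculator():
    while True:
        first_num = int(input("Enter 1st number: "))
        second_num = int(input("Enter 2nd number: "))
        op = input("Select operator: + - * / ** // ")
        if op in ops:
            # look the operator up in the dictionary and call it
            print(ops[op](first_num, second_num))
        else:
            print("Invalid!")

calculator()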
Fibonacci in Python

Published on 13 October 2020 (Updated: 13 October 2020)

In this article, we will see how to implement the Fibonacci sequence using Python.

How to Implement the Solution

Let's take a look at the code for fibonacci.py:

import sys

def fibonacci(n):
    fib = fibs()
    for i in range(1, n + 1):
        print(f'{i}: {next(fib)}')

def fibs():
    first = 1
    second = 1
    yield first
    yield second
    while True:
        new = first + second
        yield new
        first = second
        second = new

def main(args):
    try:
        fibonacci(int(args[0]))
    except (IndexError, ValueError):
        print("Usage: please input the count of fibonacci numbers to output")
        sys.exit(1)

if __name__ == "__main__":
    main(sys.argv[1:])

Now, we will consider this code block by block in the order of execution.

if __name__ == "__main__":
    main(sys.argv[1:])

This code checks if the module is being run as the main program. If it is, it passes control to the main function, handing over the arguments provided by the user.

def main(args):
    try:
        fibonacci(int(args[0]))
    except (IndexError, ValueError):
        print("Usage: please input the count of fibonacci numbers to output")
        sys.exit(1)

This main function was invoked earlier with the argument list. The next line invokes the fibonacci() function. If the function raises IndexError (raised when a sequence subscript is out of range) or ValueError, it prints the correct usage pattern, and the program exits with exit status 1, which signifies abnormal termination.

def fibonacci(n):
    fib = fibs()
    for i in range(1, n + 1):
        print(f'{i}: {next(fib)}')

def fibs():
    first = 1
    second = 1
    yield first
    yield second
    while True:
        new = first + second
        yield new
        first = second
        second = new

In the fibonacci() function, the function fibs() is called. Because fibs() uses yield, calling it returns a generator which produces values lazily. The values of first and second both start at 1, and the generator yields them first. In the while loop, further values of the Fibonacci sequence are computed using the rule new = first + second. Control goes back to fibonacci(), which prints the values returned by next(), which returns the next item from the generator. This is repeated for the count the user specified.

How to Run the Solution

If we want to run this program, we should probably download a copy of Fibonacci in Python. After that, we should make sure we have the latest Python interpreter. From there, we can simply run the following command in the terminal:

python fibonacci.py

Alternatively, we can copy the solution into an online Python interpreter and hit run.

Further Reading

Hello World in Python on sample-programs
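For example, running the program with a count of 7 would print:

$ python fibonacci.py 7
1: 1
2: 1
3: 2
4: 3
5: 5
6: 8
7: 13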
Premise / What I want to achieve

When I run the program, right-clicking selects a range and a left double-click confirms it, turning the selected range red. I repeat this twice to select two ranges. After that, I want pressing the tkinter button "一回目の範囲選択やり直し" (redo the first range selection) to erase the red of the first selected range, and pressing "二回目の範囲選択やり直しと線を消す" (redo the second range selection and erase the line) to erase the red of the second selected range and, at the same time, erase the straight line that was drawn from the start.

The problem

I tried having the button run hikinaoshi and drawing a blue line on top of the red one to hide it, but it only applies to the range selected last. I am stuck on how to keep the first and second selections separate, and on how to erase a line that has already been drawn. I would appreciate any advice. Thank you.

Relevant source code

import numpy as np
import matplotlib.pyplot as plt
import tkinter as tk

root = tk.Tk()
root.geometry('300x200')
root.title('tkinter')

def oncmask(event):
    global stat
    global leftind, rightind
    ind = np.searchsorted(xdata, event.xdata)
    plt.title("You clicked index="+str(ind))
    if event.button == 3 and stat == 1:
        leftind = ind
        ax.plot([xdata[ind]], [ydata[ind]], ".", color="red")
        stat = 2
    elif event.button == 3 and stat == 2:
        rightind = ind
        ax.plot(xdata[leftind:rightind], ydata[leftind:rightind], color="red")
        stat = 3
    elif event.button == 1 and event.dblclick == 1 and stat == 3:
        plt.title("Approved")
        mask[leftind:rightind] = False
        stat = 1
    fig.canvas.draw()

def hikinaoshi():
    ax.plot(xdata[leftind:rightind], ydata[leftind:rightind], color="blue")
    fig.canvas.draw()

def f(x):
    temp = []
    for index in x:
        temp.append(max(5.*index+10., -3.*index + 10))
    return temp

xdata = np.linspace(-10, 10, num=201)
ydata = f(xdata) + 5.*np.random.randn(xdata.size)

button1 = tk.Button(root, text="一回目の範囲選択やり直し")
button1.place(x=70, y=50)
# button1["command"] = hikinaoshi
button2 = tk.Button(root, text="二回目の範囲選択やり直しと線を消す")
button2.place(x=70, y=100)
button2["command"] = hikinaoshi

mask = np.ones(len(xdata), dtype=bool)
stat = 1
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(xdata, ydata)
cid = fig.canvas.mpl_connect('button_press_event', oncmask)

a_fit = 5
b_fit = 8
plt.plot(xdata, a_fit*xdata+b_fit, 'k-', label='fitted line', linewidth=3, alpha=0.3)
plt.show()
root.mainloop()
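One common matplotlib approach for this kind of problem (a hedged sketch, not a complete solution; the selection_lines bookkeeping and function names below are made up for illustration): ax.plot returns Line2D artists, so instead of painting blue over red, keep a reference to each artist per selection and call its remove() method, then redraw the canvas. The fitted line can be erased the same way by keeping the Line2D object returned by plt.plot.

# keep the artists drawn for each selection, keyed by selection number
selection_lines = {1: [], 2: []}  # hypothetical bookkeeping

def draw_selection(n, left, right):
    line, = ax.plot(xdata[left:right], ydata[left:right], color="red")
    selection_lines[n].append(line)
    fig.canvas.draw()

def undo_selection(n):
    # remove the artists from the axes instead of drawing over them
    for line in selection_lines[n]:
        line.remove()
    selection_lines[n].clear()
    fig.canvas.draw()

# the fitted line can be removed the same way if you keep a reference:
fit_line, = plt.plot(xdata, a_fit*xdata+b_fit, 'k-', label='fitted line', linewidth=3, alpha=0.3)
# ... later: fit_line.remove(); fig.canvas.draw()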
Performance testing HTTP services with httperf

At Vinco Orbis we recently built a web service that in turn queries some other services through HTTP and returns a list of those for which the query was successful. It looked something like this:

import requests

successful = []
for service in get_services():
    r = requests.get(service.url)
    if r.status_code == 200:
        successful.append(service.name)
print successful

As you might suppose, making a bunch of HTTP requests in a loop wasn't exactly speedy, so I set out to improve the performance by making this process asynchronous. Of course, you can't improve what you can't measure, so first I had to find out exactly how slow this was. For that I used a little tool by HP called httperf, which can make a large number of HTTP requests and report on their performance. For instance, if you run it like this:

$ httperf --hog --server=server.com --uri=/some-url --timeout=10 --num-conns=500 --rate=50

It will perform up to 50 requests per second to http://server.com/some-url, up to a total of 500, waiting up to 10 seconds for each request to complete. It has support for setting request headers, HTTPS, testing whole sequences of requests in a session and even cookie management, so you can test whole HTTP flows.

The service I needed to test requires some POST parameters. Googling for how to specify them led me to this gist which uses the session execution feature of httperf. First I created the session specification file httperf_content:

/ method=POST contents="param1=abc&param2=def"

And I executed it 150 times, with no time between session steps:

$ httperf --hog --server 127.0.0.1 --port 5000 --add-header="Content-Type: application/x-www-form-urlencoded\n" --wsesslog=150,0,httperf_content

Which reported the following results:

Total: connections 150 requests 150 replies 150 test-duration 470.479 s

Connection rate: 0.3 conn/s (3136.5 ms/conn, <=2 concurrent connections)
Connection time [ms]: min 524.7 avg 3136.5 max 15894.0 median 2735.5 stddev 2074.0
Connection time [ms]: connect 0.2
Connection length [replies/conn]: 1.000

Request rate: 0.3 req/s (3136.5 ms/req)
Request size [B]: 163.0

Reply rate [replies/s]: min 0.0 avg 0.3 max 1.0 stddev 0.2 (94 samples)
Reply time [ms]: response 3136.1 transfer 0.3
Reply size [B]: header 145.0 content 164.0 footer 0.0 (total 309.0)
Reply status: 1xx=0 2xx=149 3xx=0 4xx=0 5xx=1

CPU time [s]: user 67.66 system 399.51 (user 14.4% system 84.9% total 99.3%)
Net I/O: 0.1 KB/s (0.0*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Session rate [sess/s]: min 0.00 avg 0.32 max 1.00 stddev 0.21 (150/150)
Session: avg 1.00 connections/session
Session lifetime [s]: 3.1
Session failtime [s]: 0.0
Session length histogram: 0 150

On line 8 we can see that the server can answer an average of 0.3 requests per second, each one taking 3136.5 ms. On line 14 we can see a breakdown of the status codes returned by the service; apparently there were 149 200 OK responses and one 500 error.

I made my code asynchronous with requests_futures, a simple wrapper around Kenneth Reitz's requests library that combines it with the Python 3.2+ futures implementation. The futures library has been backported to Python 2.7, so even if you are still using Python 2 you can check it out.
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor
from requests_futures.sessions import FuturesSession

successful = []
session = FuturesSession(executor=ThreadPoolExecutor(max_workers=4))
futures = {}
for service in get_services():
    future = session.get(service.url)
    futures[future] = service.name

for future in concurrent.futures.as_completed(futures):
    r = future.result()
    if r.status_code == 200:
        successful.append(futures[future])
print successful

And, according to httperf, it performs much better:

Request rate: 0.8 req/s (1248.2 ms/req)

I tried several values for max_workers, but after 4 the performance gains were negligible. I also tried using stream=True in my requests, so that my service only read the headers and didn't bother with the body of the response, but for these services the response body is minimal and it didn't make much difference.

So, if you are building a web service and you are worried about performance, you can test your optimization ideas and then use this tool to see what works.
Why does this error in the cvtColor function happen?

Why do I get this error in my Python script? I have Python 2.6.6 installed on CentOS 6.4 x64.

[root@--- ~]# python test.py 1.mp4
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /root/opencv-2.4.4/modules/imgproc/src/color.cpp, line 3326
Traceback (most recent call last):
  File "test.py", line 231, in <module>
    print process(fileName)
  File "test.py", line 198, in process
    detector = Detector(frame)
  File "test.py", line 127, in __init__
    self._initAvg(frame)
  File "test.py", line 171, in _initAvg
    f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: /root/opencv-2.4.4/modules/imgproc/src/color.cpp:3326: error: (-215) scn == 3 || scn == 4 in function cvtColor
create matrix as having a subset of columns from another matrix

I need to get a new matrix generated by selecting a subset of columns from another matrix, given a list (or tuple) of column indices. The following is the code I am working on (there is a bit more than just the attempt to create a new matrix, but it might be useful for you to have some context).

A = matrix(QQ,[
    [2,1,4,-1,2],
    [1,-1,5,1,1],
    [-1,2,-7,0,1],
    [2,-1,8,-1,2]
])
print "A\n",A
print "A rref\n",A.rref()
p = A.pivots()
print "A pivots",p

with the following output:

A
[ 2  1  4 -1  2]
[ 1 -1  5  1  1]
[-1  2 -7  0  1]
[ 2 -1  8 -1  2]
A rref
[ 1  0  3  0  0]
[ 0  1 -2  0  0]
[ 0  0  0  1  0]
[ 0  0  0  0  1]
A pivots (0, 1, 3, 4)

Now I expected to easily find a method on matrix objects which allowed me to construct a new matrix with a subset of columns by just giving the tuple p as a parameter, but I could not find anything like that. Any ideas on how to solve this elegantly in a sage-friendly way? (avoiding for loops and excess code) thanks!
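For what it's worth, Sage matrices do provide a method along these lines; a minimal sketch using the A and p defined above (worth checking against your Sage version's documentation):

B = A.matrix_from_columns(p)  # builds a new matrix from the pivot columns of A
print "A pivot columns\n", B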
I have been learning how to configure and do some simple operations on MySQL with Flask-SQLAlchemy, and I am writing it down here as a reference for myself and for beginners like me.

1. First, of course, install the required packages:

Flask-SQLAlchemy:
pip install flask-sqlalchemy

MySQL: for installing the 64-bit zip package on Windows, see http://blog.csdn.net/werewolf_st/article/details/45932771, or simply download the installer.

Flask-MySQLdb:
pip install flask-mysqldb

2. Configure flask-sqlalchemy to connect to the MySQL database

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://root@localhost:3306/test?charset=utf8mb4'
app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN'] = True

With that, the configuration is done. SQLALCHEMY_DATABASE_URI sets the URL of the database to use; the URL format for MySQL is:

mysql://username:password@hostname/database

The configuration above uses MySQL's default user without a password, connecting to the local host (localhost:3306); database is the name of the database to use, and in this program we use the test database.

3. Define a model and create the database table

from flask import Flask
from flask.ext.script import Manager
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://root@localhost:3306/test?charset=utf8mb4'
app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN'] = True

db = SQLAlchemy(app)
manager = Manager(app)

class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
    email = db.Column(db.String(320), unique=True)
    password = db.Column(db.String(32), nullable=False)

    def __repr__(self):
        return '<User %r>' % self.username

if __name__ == '__main__':
    manager.run()

4. Related operations

First save the code above to the desktop as hello.py; then we can do the corresponding operations in the Python shell. Open cmd and run:

python C:\Users\st\Desktop\hello.py shell

This opens a Python shell environment. Before doing anything, import the flask-sqlalchemy instance (in the code above we already instantiated SQLAlchemy as db, so we import that), and then we can do the following operations on the MySQL database (the original screenshots for each step are not reproduced here; a code sketch follows below):

1. Create the tables
2. Drop the tables
3. Insert data
4. Query data
(1) filter_by query (exact query)
(2) get(primary key) (id is usually the primary key)
(3) filter query (fuzzy query)
(4) logical NOT
(5) logical AND
(6) logical OR
(7) first() returns the first object found
(8) all() returns all objects found
5. Delete data
6. Update data

Finally done with this write-up... time to sleep~~~
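Since the screenshots for the shell operations above didn't survive, here is a minimal sketch of the equivalent commands, assuming the hello.py module above (the example values are made up):

from hello import db, User
from sqlalchemy import and_, or_

# 1. create the tables / 2. drop the tables
db.create_all()
db.drop_all()

# 3. insert data
u = User(username='admin', email='admin@example.com', password='secret')
db.session.add(u)
db.session.commit()

# 4. query data
User.query.filter_by(username='admin').first()           # (1) exact query
User.query.get(1)                                        # (2) by primary key
User.query.filter(User.username.endswith('min')).all()   # (3) fuzzy query
User.query.filter(User.username != 'admin').all()        # (4) logical NOT
User.query.filter(and_(User.username == 'admin',
                       User.id == 1)).all()              # (5) logical AND
User.query.filter(or_(User.username == 'admin',
                      User.id == 2)).all()               # (6) logical OR

# 5. delete data
db.session.delete(u)
db.session.commit()

# 6. update data
u.email = 'new@example.com'
db.session.commit()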
Over the years we've used more and more threads in OpenLP for running things like web servers, downloads and other things that need to happen concurrently with the UI. Unfortunately because everything has been in bits and in various places, we've never had a unified way to run and keep track of threads. Thankfully we've largely not run into many issues, but in the lead up to version 3.0 we did start running into segfaults, not just when OpenLP stopped, but also preventing OpenLP from starting.

New Threading API

When you want to create a thread, use this very simple thread API to ensure OpenLP knows about your thread.

Creating a thread: run_thread()

from openlp.core.threading import ThreadWorker, run_thread

class Worker(ThreadWorker):
    def start(self):
        """
        Do your stuff in here, then emit the quit signal.
        """
        self.server = Server()
        self.server.run()
        self.quit.emit()

    def stop(self):
        """
        If your thread is long-running, this is how OpenLP will stop it
        if it is still running when the user exits OpenLP.
        """
        self.server.stop()

class MyClass(object):
    def run_server(self):
        """
        Run the server in a thread
        """
        worker = Worker()
        run_thread(worker, 'my_server')

Fetch the worker: get_thread_worker()

from openlp.core.threading import get_thread_worker

def stop_server():
    """
    Stop the server
    """
    worker = get_thread_worker('my_server')
    worker.stop()

Check if thread is finished: is_thread_finished()

from openlp.core.threading import is_thread_finished

def monitor_server():
    """
    Check if the server is still running
    """
    if is_thread_finished('my_server'):
        print('Server is stopped, uhoh!')
    else:
        print('Server is still running, all is good.')
Description

Given a non-empty, singly linked list with head node head, return a middle node of the linked list. If there are two middle nodes, return the second middle node.

Example 1:
Input: [1,2,3,4,5]
Output: Node 3 from this list (Serialization: [3,4,5])
The returned node has value 3. (The judge's serialization of this node is [3,4,5].) Note that we returned a ListNode object ans, such that: ans.val = 3, ans.next.val = 4, ans.next.next.val = 5, and ans.next.next.next = NULL.

Example 2:
Input: [1,2,3,4,5,6]
Output: Node 4 from this list (Serialization: [4,5,6])
Since the list has two middle nodes with values 3 and 4, we return the second one.

Note: The number of nodes in the given list will be between 1 and 100.

Explanation

Use a fast pointer to count the nodes in a first pass, then walk a slow pointer from the head to the middle node.

Python Solution

# Definition for singly-linked list.
# class ListNode:
#     def __init__(self, x):
#         self.val = x
#         self.next = None

class Solution:
    def middleNode(self, head: ListNode) -> ListNode:
        if head == None:
            return head
        count = 0
        fast = head
        while fast != None:
            fast = fast.next
            count += 1
        mid = count // 2 + 1
        slow = head
        for i in range(1, mid):
            slow = slow.next
        return slow

Time complexity: O(n). Space complexity: O(1).
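The same problem can also be solved in a single pass with the classic fast/slow two-pointer trick: the fast pointer advances two steps for every one step of the slow pointer, so when fast runs off the end, slow sits on the middle node (the second middle for even-length lists). A minimal sketch, using the same ListNode definition as above:

class Solution:
    def middleNode(self, head: ListNode) -> ListNode:
        slow = fast = head
        # fast moves twice as quickly; when it falls off the end,
        # slow is at the middle node.
        while fast and fast.next:
            slow = slow.next
            fast = fast.next.next
        return slow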
This post assumes you know what post types, custom fields, and taxonomies are. Google them for more knowledge, and read the first article in this series. I tried to make this post as friendly as possible with front-end developers in mind, so forgive the fact that I explain a bit too much and perhaps a bit too slowly.

Custom Fields

Custom fields can be fun. As mentioned in the first article, we use these for arbitrary data that we would like to attach to our post. Thus we can grab this data either in our editor or, more usefully for me, in our PHP templates.

So without Pods, we would be left having to manually add custom fields to each post under the post editor. We would have to add them each time we make a new post. And the only type of input it takes is a basic text field. 🙁 What if I wanted a checkbox? A date? A currency field that only accepts numbers! Pods Custom Fields is here to save us!

Custom fields using Pods

You can edit a Pod at any time by going to 'Pods Admin' > 'Edit Pods'. In the new Pod we created, under 'Manage Fields' we can add custom fields to show up on every new post we create under portfolio! We can choose what kind of input it takes and we can even give it a default value, should we need to.

Let's Do It!

I'll start with basic fields and work our way up. You can change the field type in the dropdown you see after pressing 'Add Field'. I will be changing this field over and over to show you how it works. You will also have to go to the post you created and update it after changing the field type to see it change properly.

Text Fields

In the 'Field Type' dropdown, 'Text' is the most basic type of field. The first one, 'Plain Text', is literally the same as if you used WordPress without Pods, except the field name always shows up nicely when creating the new post, and you can give it a default value among other things, like forcing it to be required before the post can be published.

Let's create one called "my awesome field". Press tab, and the 'Name' field will be filled with 'my_awesome_field'. That is the handle for us to use in our theme files. Again, I would prefix this. So mine would be 'snp_my_awesome_field', snp standing for saltnpixels. Now if we save the Pod and go back out and make a new post under portfolio, here is what we would see!

Pay no attention to the fact that when I took the screenshot, the highlight seems to be 'Horizontal Line'. I guess no one gives it enough attention so it tried stealing the show. The real highlight is the custom fields on 'More Fields' at the bottom. Notice the field under the editor already there and ready for us to fill out. Awesome!

Now the other fields under 'Text' are the same except they will force the value to conform on save. For instance, 'Website' will force 'example.com' into 'https://example.com'. Email will force an email… You get the picture, right?

Outputting Fields

Now that we have made a custom field, how can we show this data in our theme? Assuming you are in the WP loop, you can easily use:

<?php
//inside loop
the_content();

//and we add this:
echo get_post_meta(get_the_ID(), 'snp_my_awesome_field', true);

//actually if using the website field you would write
echo esc_url( get_post_meta(get_the_ID(), 'snp_my_awesome_field', true) );
//always escape your outputs. Learn more here:
//https://codex.wordpress.org/Validating_Sanitizing_and_Escaping_User_Data

Let's go through some other fields.

Paragraph Fields

Very similar to text fields.
Here I would choose 'Additional Options' and make some choices on some stuff. It is quite self-explanatory, and the question marks can help.

Date/Time Fields

These are quite simple and straightforward. They also work the same way as the ones above. On your edit-post page, they will have a dropdown using jQuery UI for a datepicker or time, and on output it will show the format you chose under 'Additional Options'.

The Other Fields

Ok, I'm jumping down to the last set of fields, called 'Other', because these are the next easiest. They work the same way. Color Picker seems to only allow hex values and no transparency. Don't know why… And the 'Yes/No' field creates a checkbox option. Here is how you can output these:

//Color Picker use:
//returns a hex value
echo get_post_meta(get_the_ID(), 'snp_my_awesome_field', true);

//typical use
<a style="color: <?php echo get_post_meta(get_the_ID(), 'snp_my_awesome_field', true); ?>">Link

//Checkbox use:
//checkbox is usually used in a conditional
if( get_post_meta(get_the_ID(), 'snp_my_awesome_field', true) ) {
    //its been checked
} else {
    //its not checked
}

Numbers / Relationships / Media

Now things are gonna change a bit. It's quite easy to see what Number, Currency and Media do. And you can output a number or currency using get_post_meta() and it will show you the number you put in it. But it won't be formatted in any way. It will just be a text of numbers. If you chose Number and want a comma to show up on the front end like '1,234', you won't get that automatically. And if you chose Currency and are wondering where the dollar sign is, or where some decimal points are, it ain't happening. With Media it will output 'Array', and Relationship will do that too. We will get to relationships soon… hopefully.

But all this brings us deeper into using more of Pods. To get these fields showing properly we will need to learn about display() and field(). While the other fields can use display() or field(), I am so used to using get_post_meta(). However, for Numbers/Relationships/Media it is easier to use some of Pods' methods.

Getting a Pod and its methods

display() and field() are not functions. They are methods and, as such, they must be attached to an object. We must create a Pod object that points to our post and then pull out the custom fields for that post. It's easy, don't worry. To do that we use pods(), which needs a post type and an ID.

Pods(): This gets a Pod object. It can get a whole Pod, which means all the posts in a post type. Or it can get just one. It depends on what you give it. Once you get it, you can go through all the Pod items (posts), or if you only get one you can work on that one post.

Display(): Once you have the Pod item saved in a variable, we can use display() on it and get some custom fields to show. This shows the field as you would usually want the user to see it. All nice and formatted.

//get our post's pods object in the loop, store it in $portfolio_item
$portfolio_item = pods('name_of_pod', get_the_ID());

echo $portfolio_item->display('snp_my_awesome_field');

This will get your custom field properly. If it's a number, it will output with a comma or any formatting you chose in Pods admin. Currency will show up properly, and Media will output the URL to the file. You can use that inside an img tag and it will show up. display() is usually found with some echo.

Field(): field(), I find, is more for getting raw information to manipulate before echoing it. It is, in a way, very similar to get_post_meta() and, in a way, not.
For instance, it's not similar when using a date/time field, which will show up the same using display() and get_post_meta(), but not field(). On the other hand, like get_post_meta(), field() will also echo an array if you try it on media or relationship fields, and on numbers it would also be unformatted.

Here is the same scenario as above with display(), except this time we will try to use field(). Assuming our field is now a media field:

//get our post's pods object in the loop, store it in $portfolio_item
$portfolio_item = pods('name_of_pod', get_the_ID());

echo $portfolio_item->field('snp_my_awesome_field');
//if snp_my_awesome_field was a media file or a relationship it would
//return an Array. And as for numbers there would be no decimals.

Troubleshooting that Array

Now it would seem field() is not useful here, but if you var_dump() the custom field you will see what it is made of, which can help us understand how it works. (Consequently, you can do the same with get_post_meta().)

var_dump( $portfolio_item->field('snp_my_awesome_field') );

//the same would be output on
var_dump( get_post_meta(get_the_ID(), 'snp_my_awesome_field', true) );

Echoing a Piece of the Array

You will notice a long array, but towards the bottom you will see it has the actual file path, which is what we want, and it's inside the key called ['guid']. We can grab this and output it easily in a few interesting ways:

//my favorite
echo $portfolio_item->field('snp_my_awesome_field.guid');

//interesting… didn't know this worked until just now!
echo get_post_meta(get_the_ID(), 'snp_my_awesome_field.guid', true);

//and of course, I figured
echo get_post_meta(get_the_ID(), 'snp_my_awesome_field', true)['guid'];

So that is how display(), field() and get_post_meta() interact with custom fields of Pods. If one doesn't work, try the other! Remember, display() and field() need you to get the Pod item first using pods(). field() really comes in handy when dealing with relationship fields. I have not really discussed relationship fields and I will have to make a separate post for that because, well, it's a subject on its own really. Hopefully that will be next.
Every month TIOBE publishes a fresh ranking of popular programming languages, listing the twenty most popular ones. The ranking says nothing about whether a language is good or bad; it merely reflects how hot a given area of software development is. Languages are not evolving toward being more common but toward being more focused on particular domains. Some languages focus on simplicity and efficiency, such as Python, whose built-in list and dict structures are far easier to use than anything in C/C++; but for that safety and ease of use the language also sacrifices some performance. In some domains, such as telecommunications, performance is critical, but that does not mean coders in those fields can only struggle in the traps of C/C++: one option is mixed-language programming.

A good application of mixed Python and C/C++ programming that I have seen is NS3 (Network Simulator 3), a network simulation package: its internal computation engine needs high performance, while the user-facing modeling part needs to be flexible and easy to use. NS3's choice is to simulate the core components and protocols in C/C++ and to do modeling and extension in Python.

This article introduces three ways of mixing Python with C/C++ and analyzes their performance.

How mixed programming works

First, note that "Python" is only a language specification, and in fact Python has many implementations. CPython is the standard Python, written in C: Python scripts are compiled into CPython bytecode, which is then interpreted by a virtual machine, and garbage collection uses reference counting. When we talk about mixing Python with C/C++ we actually mean on top of the CPython interpreter. Beyond that there are Jython, IronPython, PyPy, and Pyston: Jython is written in Java, uses the JVM's garbage collection, and can be mixed with Java, while IronPython targets the .NET platform.

The essence of mixing Python with C/C++ is that Python calls a dynamic library compiled from C/C++. The key is converting Python data types into C/C++ data types, handing them to the compiled function, and then converting the returned values back into Python data types.

Method 1: use the ctypes module in Python to convert Python types into C/C++ types

First, write a piece of C code that adds to the values in a buffer:

extern "C" {
    int addBuf(char* data, int num, char* outData);
}

int addBuf(char* data, int num, char* outData) {
    for (int i = 0; i < num; ++i) {
        outData[i] = data[i] + 3;
    }
    return num;
}

Then compile the code above into a .so library, using the following command:

>gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC addbuf.c -o addbuf.o

Finally write the Python code: use the ctypes library to convert Python types into the types the C function needs, then call the function from the .so library with those arguments:

from ctypes import *  # cdll, c_int

lib = cdll.LoadLibrary('libmathBuf.so')
callAddBuf = lib.addBuf

num = 4
numbytes = c_int(num)
data_in = (c_byte * num)()
for i in range(num):
    data_in[i] = i
data_out = (c_byte * num)()
ret = lib.addBuf(data_in, numbytes, data_out)  # call the function in the .so library

Method 2: use Python.h in the C/C++ program and write a wrapper interface

This method requires modifying the C/C++ code: the externally visible function handles the input/output parameters itself, adapting them for Python. Here is a piece of C code that executes its argument as a shell command:

#include <Python.h>

static PyObject* SpamError;

static PyObject* spam_system(PyObject* self, PyObject* args) {
    const char* command;
    int sts;
    if (!PyArg_ParseTuple(args, "s", &command))  // parse args as a string and assign it to command
        return NULL;
    sts = system(command);  // run the system command
    if (sts < 0) {
        PyErr_SetString(SpamError, "System command failed");
        return NULL;
    }
    return PyLong_FromLong(sts);  // convert the result into a PyObject
}

// method table
static PyMethodDef SpamMethods[] = {
    {"system", spam_system, METH_VARARGS, "Execute a shell command."},
    {NULL, NULL, 0, NULL}
};

// module initialization function
PyMODINIT_FUNC initspam(void) {
    PyObject* m;
    //m = PyModule_Create(&spammodule); // v3.4
    m = Py_InitModule("spam", SpamMethods);
    if (m == NULL)
        return;
    SpamError = PyErr_NewException("spam.error", NULL, NULL);
    Py_INCREF(SpamError);
    PyModule_AddObject(m, "error", SpamError);
}

All inputs and outputs are handled as PyObject objects; conversion functions turn the Python data types into C/C++ types, and the return values are handled the same way. Compared with the first method there is an extra initialization function, which is required so that the compiled .so library can be used as a Python module.

From Python it is used like this:

import spam
spam.system("ls")

For writing Python extensions in C/C++, see: http://docs.python.org/2.7/extending/extending.html

Method 3: use SWIG to generate a standalone wrapper file

This is not really a new method; in practice it is a packaged version of the second one. SWIG is a development tool that connects software written in C or C++ with a variety of high-level programming languages, including common scripting languages such as Perl, PHP, Python, Tcl and Ruby, as well as C#, Java, R and others.

In practice you write a standalone interface declaration file for the C/C++ program (usually simple), and SWIG analyzes the C/C++ source to work out how the interface should be wrapped. Given a target language, SWIG generates the additional wrapper source files; when compiling the .so library you simply compile and link the wrapper file along with your own code. A C example:

int system(const char* command) {
    sts = system(command);
    if (sts < 0) {
        return NULL;
    }
    return sts;
}

The C source drops the Python adaptation wrapper and defines only the system function itself, which is much cleaner than the second method; removing the code that coupled the C to Python also makes the C code more reusable.

Then write the SWIG interface declaration file spam.i:

%module spam
%{
#include "spam.h"
%}
%include "spam.h"
%include "typemaps.i"
int system(const char* INPUT);

This is a language-independent module declaration: create a module named spam and declare the system function, mainly marking its parameter as an input. Then run the SWIG compiler:

>swig -c++ -python spam.i

SWIG generates two files, spam_wrap.cxx and spam.py. The generated spam_wrap.cxx is long, but the key part is the wrapper around the function: the wrapper still receives PyObject objects, does the type conversion internally, and ultimately calls the system function from the source. The other generated file, spam.py, actually wraps the .so library in yet another layer of Python (somewhat redundant in practice).
That layer uses the _spam module; the extension is in fact named _spam. For more on SWIG with Python, see: http://www.swig.org/Doc1.3/Python.html

The next step is to compile and install the Python module. Python provides the distutils module, which makes it easy to compile and install Python modules. Write an installation script setup.py (the screenshot of the script is gone; a sketch is included at the end of this post). Running python setup.py build compiles the extension: a build directory is created containing the compiled .so library. With the .so in the current directory, Python can already load the module with import; you can also run python setup.py install to install the module into the language's extension directory, site-packages. On building Python extensions, see https://docs.python.org/2/extending/building.html#building

Performance of mixed programming

One of the most important use cases for mixed programming is performance-critical code. This section runs a few small experiments to verify how mixed programming performs, or rather, how to write programs so that its performance advantage actually shows. We use bubble sort for the verification.

1) Experiment 1: the performance gap between Python and C/C++, using bubble sort

The Python bubble sort:

def bubble(arr, length):
    j = length - 1
    while j >= 0:
        i = 0
        while i < j:
            if arr[i] > arr[i+1]:
                tmp = arr[i+1]
                arr[i+1] = arr[i]
                arr[i] = tmp
            i += 1
        j -= 1

The C bubble sort:

void bubble(int* arr, int length){
    int j = length - 1;
    int i;
    int tmp;
    while(j >= 0){
        i = 0;
        while(i < j){
            if(arr[i] > arr[i+1]){
                tmp = arr[i+1];
                arr[i+1] = arr[i];
                arr[i] = tmp;
            }
            i += 1;
        }
        j -= 1;
    }
}

Using a fixed array of length 100 and sorting it 10,000 times (restoring the original order after each sort), we record the execution time: over several runs on the same machine, the Python version takes about 10.3 s, while the C version (compiled without any optimization flags) takes only about 0.29 s. By comparison Python's performance really is much worse (mainly because operating on a Python list is far less efficient than on a C array), but many Python extensions are written in C precisely to improve efficiency; numpy, Python's data-analysis library, performs quite well. The next experiment verifies how much performance improves if Python uses the C bubble sort as an extension library.

2) Experiment 2: calling from Python via ctypes

Here the array object is defined directly with c_int, which also avoids data-type conversion overhead at call time:

import time
from ctypes import *

IntArray100 = c_int * 100
arr = IntArray100(87,23,41, 3, 2, 9,10,23, 0,21, 5,15,93, 6,19,24,18,56,11,80,
                  34, 5,98,33,11,25,99,44,33,78,52,31,77, 5,22,47,87,67,46,83,
                  89,72,34,69, 4,67,97,83,23,47,69, 8, 9,90,20,58,20,13,61,99,
                   7,22,55,11,30,56,87,29,92,67,99,16,14,51,66,88,24,31,23,42,
                  76,37,82,10, 8, 9, 2,17,84,32,66,77,32,17, 5,68,86,22, 1, 0)
...
...
if __name__ == "__main__":
    libbubble = CDLL('libbubble.so')
    time1 = time.time()
    for i in xrange(100000):
        libbubble.initArr(arr1, arr, 100)
        libbubble.bubble(arr1, 100)
    time2 = time.time()
    print time2 - time1

To reduce measurement error, the loop was increased to 100,000 sorts. The native C program, compiled with optimization flags, takes about 0.65 s; Python using the C extension (same compiler flags) takes only about 2.3 s.

3) Experiment 3: handling the arguments as PyObject in C

In this variant Python still keeps the numbers to sort in a list; the C function copies the list into an array, sorts it, and then writes the sorted values back into the original list. 100,000 sorts take about 1.0 s.

4) Experiment 4: wrapping the C function with SWIG

Declare %array_class(int, intArray); in the interface file, then use intArray as the array on the Python side, again running 100,000 sorts. The Python version (same compiler flags) takes only about 0.7 s, roughly 7% slower than native C.

Conclusions

1. Python's list is very inefficient; in high-performance scenarios, avoid heavy looping, indexing, and assignment over lists. When necessary, prefer a ctypes array, or implement that part in C.
2. Hand time-consuming, CPU-bound logic to C/C++, and have Python call it as an extension.
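The setup.py screenshot did not survive in this copy; here is a minimal sketch of such a distutils script, using the spam/_spam names from the SWIG example above (the exact source list is an assumption):

# setup.py -- build the SWIG-wrapped extension with distutils
from distutils.core import setup, Extension

spam_module = Extension('_spam', sources=['spam_wrap.cxx', 'spam.c'])

setup(name='spam',
      version='0.1',
      description='SWIG-wrapped spam extension',
      ext_modules=[spam_module],
      py_modules=['spam'])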
June 24, 2019 — A guest article posted by the VSCO Engineering team

At VSCO, we build creative tools, spaces, and connections driven by self-expression. Our app is a place where people can create and edit images and videos, discover tips and new ideas, and connect to a vibrant global community void of public likes, comments, and follower counts. We use machine learning as a tool for personalizing and guiding eac…

A photo of a building before any filter is applied. Image by Sarah Hollander (left)
The photo of the building with the AU5 preset applied. Image by Sarah Hollander (right)

Related Images
Just For You
Search User Suggestions

Categorizing an image

graph_def_file = "model_name.pb"
input_arrays = ["input_tensor_name"]    # this array can have more than one input name if the model requires multiple inputs
output_arrays = ["output_tensor_name"]  # this array can have more than one output name if the model has multiple outputs

converter = lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Once we had a model that could assign categories to images, we were able to bundle it into our app and run inference on images with it using ML Kit. Since we were using our own custom trained model, we used the Custom Model API from ML Kit. For better accuracy, we decided to forgo the quantization step in model conversion and decided to use a floating point model in ML Kit. There were some challenges here because ML Kit by default assumes a quantized model. However, with not much effort, we were able to change some of the steps in model initialization to support a floating point model.

// create a model interpreter for local model (bundled with app)
FirebaseModelOptions modelOptions = new FirebaseModelOptions.Builder()
    .setLocalModelName("model_name")
    .build();
modelInterpreter = FirebaseModelInterpreter.getInstance(modelOptions);

// specify input output details for the model
// SqueezeNet architecture uses 227 x 227 image as input
modelInputOutputOptions = new FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 227, 227, 3})
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, numLabels})
    .build();

// create input data
// imgDataArray is a float[][][][] array of (1, 227, 227, 3)
FirebaseModelInputs input = new FirebaseModelInputs.Builder().add(imgDataArray).build();

// run inference
modelInterpreter.run(input, modelInputOutputOptions);

Suggesting presets for an image
View source on GitHub

In TensorFlow 2.0, iterating over a TensorShape instance returns values.

tf.compat.v1.enable_v2_tensorshape()

This enables the new behavior. Concretely, tensor_shape[i] returned a Dimension instance in V1, but in V2 it returns either an integer, or None.

Examples:

#######################
# If you had this in V1:
value = tensor_shape[i].value

# Do this in V2 instead:
value = tensor_shape[i]

#######################
# If you had this in V1:
for dim in tensor_shape:
    value = dim.value
    print(value)

# Do this in V2 instead:
for value in tensor_shape:
    print(value)

#######################
# If you had this in V1:
dim = tensor_shape[i]
dim.assert_is_compatible_with(other_shape)  # or using any other shape method

# Do this in V2 instead:
if tensor_shape.rank is None:
    dim = Dimension(None)
else:
    dim = tensor_shape.dims[i]
dim.assert_is_compatible_with(other_shape)  # or using any other shape method

# The V2 suggestion above is more explicit, which will save you from
# the following trap (present in V1):
# you might do in-place modifications to `dim` and expect them to be reflected
# in `tensor_shape[i]`, but they would not be.
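A quick sketch of the difference in practice, assuming TF 1.x with the switch enabled (under TF 2.x this behavior is already the default, so the call is a no-op):

import tensorflow as tf

tf.compat.v1.enable_v2_tensorshape()

shape = tf.TensorShape([2, None, 3])
print(shape[1])      # None -- a plain value, not a Dimension instance
for value in shape:  # iteration also yields plain values
    print(value)     # 2, None, 3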
Sentence Embeddings Models trained on Paraphrases

This model is from the sentence-transformers repository. It was trained on SNLI + MultiNLI and on the STS benchmark dataset. Further details on SBERT can be found in the paper: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

This model is a multilingual version; it was trained on parallel data for 50+ languages. For more details, see: SBERT.net - Pretrained Models

Usage (HuggingFace Models Repository)

You can use the model directly from the model repository to compute sentence embeddings:

from transformers import AutoTokenizer, AutoModel
import torch

#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask

#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of string.',
             'The quick brown fox jumps over the lazy dog.']

#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")

#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')

#Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

Usage (Sentence-Transformers)

Using this model becomes more convenient when you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('model_name')
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of string.',
             'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)

print("Sentence embeddings:")
print(sentence_embeddings)

Citing & Authors

If you find this model helpful, feel free to cite our publication Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation:

@inproceedings{reimers-2020-multilingual-sentence-bert,
  title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
  author = "Reimers, Nils and Gurevych, Iryna",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
  month = "11",
  year = "2020",
  publisher = "Association for Computational Linguistics",
  url = "https://arxiv.org/abs/2004.09813",
}
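With either route, the result is one embedding vector per sentence, and ranking sentences by semantic similarity is then a short step. A minimal sketch with PyTorch, assuming sentence_embeddings is the torch tensor produced by the HuggingFace route above:

import torch.nn.functional as F

# cosine similarity of the first sentence against the other two;
# broadcasting compares row 0 with every remaining row
sims = F.cosine_similarity(sentence_embeddings[0:1], sentence_embeddings[1:])
print(sims)  # higher values = more semantically similar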
Detect human faces in an image, return face rectangles, and optionally faceIds, landmarks, and attributes.

This API is currently available in: West US, West US 2, East US, East US 2, West Central US, South Central US, West Europe, North Europe, Southeast Asia, East Asia, Australia East, Brazil South, Canada Central, Central India, UK South, Japan East, Central US, France Central, Korea Central, Japan West, North Central US, South Africa North, UAE North. Select the testing console in the region where you created your resource.

Request parameters:

returnFaceId: Return faceIds of the detected faces or not. The default value is true.
returnFaceLandmarks: Return face landmarks of the detected faces or not. The default value is false.
returnFaceAttributes: Analyze and return one or more specified face attributes as a comma-separated string, e.g. "returnFaceAttributes=age,gender". Supported face attributes include age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure and noise. Face attribute analysis has additional computational and time cost.
recognitionModel: The 'recognitionModel' associated with the detected faceIds. Supported 'recognitionModel' values include "recognition_01", "recognition_02" and "recognition_03". The default value is "recognition_01". "recognition_03" is recommended since its overall accuracy is improved compared with "recognition_01" and "recognition_02".
returnRecognitionModel: Return 'recognitionModel' or not. The default value is false.
detectionModel: The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include "detection_01" or "detection_02". The default value is "detection_01".
faceIdTimeToLive: The number of seconds for the face ID being cached. Supported range is from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours).

JSON fields in the request body:

url (String): URL of input image.

{ "url": "http://example.com/1.jpg" }

[binary data]

A successful call returns an array of face entries ranked by face rectangle size in descending order. An empty response indicates no faces were detected. A face entry may contain the following values depending on input parameters:

faceId (String): Unique faceId of the detected face, created by the detection API; it will expire 24 hours after the detection call. To return this, the "returnFaceId" parameter must be true.
recognitionModel (String): The 'recognitionModel' associated with this faceId. This is only returned when 'returnRecognitionModel' is explicitly set as true.
faceRectangle (Object): A rectangle area for the face location on the image.
faceLandmarks (Object): An array of 27-point face landmarks pointing to the important positions of face components. To return this, the "returnFaceLandmarks" parameter must be true.
faceAttributes (Object): Face attributes.

[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "recognitionModel": "recognition_03",
    "faceRectangle": { "width": 78, "height": 78, "left": 394, "top": 54 },
    "faceLandmarks": {
      "pupilLeft": { "x": 412.7, "y": 78.4 },
      "pupilRight": { "x": 446.8, "y": 74.2 },
      "noseTip": { "x": 437.7, "y": 92.4 },
      "mouthLeft": { "x": 417.8, "y": 114.4 },
      "mouthRight": { "x": 451.3, "y": 109.3 },
      "eyebrowLeftOuter": { "x": 397.9, "y": 78.5 },
      "eyebrowLeftInner": { "x": 425.4, "y": 70.5 },
      "eyeLeftOuter": { "x": 406.7, "y": 80.6 },
      "eyeLeftTop": { "x": 412.2, "y": 76.2 },
      "eyeLeftBottom": { "x": 413.0, "y": 80.1 },
      "eyeLeftInner": { "x": 418.9, "y": 78.0 },
      "eyebrowRightInner": { "x": 4.8, "y": 69.7 },
      "eyebrowRightOuter": { "x": 5.5, "y": 68.5 },
      "eyeRightInner": { "x": 441.5, "y": 75.0 },
      "eyeRightTop": { "x": 446.4, "y": 71.7 },
      "eyeRightBottom": { "x": 447.0, "y": 75.3 },
      "eyeRightOuter": { "x": 451.7, "y": 73.4 },
      "noseRootLeft": { "x": 428.0, "y": 77.1 },
      "noseRootRight": { "x": 435.8, "y": 75.6 },
      "noseLeftAlarTop": { "x": 428.3, "y": 89.7 },
      "noseRightAlarTop": { "x": 442.2, "y": 87.0 },
      "noseLeftAlarOutTip": { "x": 424.3, "y": 96.4 },
      "noseRightAlarOutTip": { "x": 446.6, "y": 92.5 },
      "upperLipTop": { "x": 437.6, "y": 105.9 },
      "upperLipBottom": { "x": 437.6, "y": 108.2 },
      "underLipTop": { "x": 436.8, "y": 111.4 },
      "underLipBottom": { "x": 437.3, "y": 114.5 }
    },
    "faceAttributes": {
      "age": 71.0,
      "gender": "male",
      "smile": 0.88,
      "facialHair": { "moustache": 0.8, "beard": 0.1, "sideburns": 0.02 },
      "glasses": "sunglasses",
      "headPose": { "roll": 2.1, "yaw": 3, "pitch": 1.6 },
      "emotion": { "anger": 0.575, "contempt": 0, "disgust": 0.006, "fear": 0.008, "happiness": 0.394, "neutral": 0.013, "sadness": 0, "surprise": 0.004 },
      "hair": {
        "bald": 0.0,
        "invisible": false,
        "hairColor": [
          { "color": "brown", "confidence": 1.0 },
          { "color": "blond", "confidence": 0.88 },
          { "color": "black", "confidence": 0.48 },
          { "color": "other", "confidence": 0.11 },
          { "color": "gray", "confidence": 0.07 },
          { "color": "red", "confidence": 0.03 }
        ]
      },
      "makeup": { "eyeMakeup": true, "lipMakeup": false },
      "occlusion": { "foreheadOccluded": false, "eyeOccluded": false, "mouthOccluded": false },
      "accessories": [
        { "type": "headWear", "confidence": 0.99 },
        { "type": "glasses", "confidence": 1.0 },
        { "type": "mask", "confidence": 0.87 }
      ],
      "blur": { "blurLevel": "Medium", "value": 0.51 },
      "exposure": { "exposureLevel": "GoodExposure", "value": 0.55 },
      "noise": { "noiseLevel": "Low", "value": 0.12 }
    }
  }
]

Error code and message returned in JSON:

BadArgument: JSON parsing error. Bad or unrecognizable request JSON body.
BadArgument: Invalid argument returnFaceAttributes. Supported values are: age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure and noise in a comma-separated format.
BadArgument: 'recognitionModel' is invalid.
BadArgument: 'detectionModel' is invalid.
BadArgument: 'returnFaceAttributes' is not supported by detection_02.
BadArgument: 'returnLandmarks' is not supported by detection_02.
InvalidURL: Invalid image format or URL. Supported formats include JPEG, PNG, GIF (the first frame) and BMP.
InvalidURL: Failed to download image from the specified URL. Remote server error returned.
InvalidImage: Decoding error, image format unsupported.
InvalidImageSize: Image size is too small. The valid image file size should be larger than or equal to 1KB.
InvalidImageSize: Image size is too big.
The valid image file size should be no larger than 6MB. { "error": { "code": "BadArgument", "message": "Request body is invalid." } } Error code and message returned in JSON: Error Code Error Message Description Unspecified Invalid subscription Key or user/plan is blocked. { "error": { "code": "Unspecified", "message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key." } } { "error": { "statusCode": 403, "message": "Out of call volume quota. Quota will be replenished in 2 days." } } Operation exceeds maximum execution time. { "error": { "code": "OperationTimeOut", "message": "Request Timeout." } } Unsupported media type error. Content-Type is not in the allowed types: { "error": { "code": "BadArgument", "message": "Invalid Media Type." } } { "error": { "statusCode": 429, "message": "Rate limit is exceeded. Try again in 26 seconds." } } @ECHO OFF curl -v -X POST "https://northeurope.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes={string}&recognitionModel=recognition_03&returnRecognitionModel=false&detectionModel=detection_02&faceIdTimeToLive=86400" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}" using System; using System.Net.Http.Headers; using System.Text; using System.Net.Http; using System.Web; namespace CSHttpClientSample { static class Program { static void Main() { MakeRequest(); Console.WriteLine("Hit ENTER to exit..."); Console.ReadLine(); } static async void MakeRequest() { var client = new HttpClient(); var queryString = HttpUtility.ParseQueryString(string.Empty); // Request headers client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}"); // Request parameters queryString["returnFaceId"] = "true"; queryString["returnFaceLandmarks"] = "false"; queryString["returnFaceAttributes"] = "{string}"; queryString["recognitionModel"] = "recognition_03"; queryString["returnRecognitionModel"] = "false"; queryString["detectionModel"] = "detection_02"; queryString["faceIdTimeToLive"] = "86400"; var uri = "https://northeurope.api.cognitive.microsoft.com/face/v1.0/detect?" + queryString; HttpResponseMessage response; // Request body byte[] byteData = Encoding.UTF8.GetBytes("{body}"); using (var content = new ByteArrayContent(byteData)) { content.Headers.ContentType = new MediaTypeHeaderValue("< your content type, i.e. 
application/json >"); response = await client.PostAsync(uri, content); } } } } // // This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/) import java.net.URI; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.HttpClient; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.utils.URIBuilder; import org.apache.http.impl.client.HttpClients; import org.apache.http.util.EntityUtils; public class JavaSample { public static void main(String[] args) { HttpClient httpclient = HttpClients.createDefault(); try { URIBuilder builder = new URIBuilder("https://northeurope.api.cognitive.microsoft.com/face/v1.0/detect"); builder.setParameter("returnFaceId", "true"); builder.setParameter("returnFaceLandmarks", "false"); builder.setParameter("returnFaceAttributes", "{string}"); builder.setParameter("recognitionModel", "recognition_03"); builder.setParameter("returnRecognitionModel", "false"); builder.setParameter("detectionModel", "detection_02"); builder.setParameter("faceIdTimeToLive", "86400"); URI uri = builder.build(); HttpPost request = new HttpPost(uri); request.setHeader("Content-Type", "application/json"); request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}"); // Request body StringEntity reqEntity = new StringEntity("{body}"); request.setEntity(reqEntity); HttpResponse response = httpclient.execute(request); HttpEntity entity = response.getEntity(); if (entity != null) { System.out.println(EntityUtils.toString(entity)); } } catch (Exception e) { System.out.println(e.getMessage()); } } } <!DOCTYPE html> <html> <head> <title>JSSample</title> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script> </head> <body> <script type="text/javascript"> $(function() { var params = { // Request parameters "returnFaceId": "true", "returnFaceLandmarks": "false", "returnFaceAttributes": "{string}", "recognitionModel": "recognition_03", "returnRecognitionModel": "false", "detectionModel": "detection_02", "faceIdTimeToLive": "86400", }; $.ajax({ url: "https://northeurope.api.cognitive.microsoft.com/face/v1.0/detect?" 
+ $.param(params), beforeSend: function(xhrObj){ // Request headers xhrObj.setRequestHeader("Content-Type","application/json"); xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}"); }, type: "POST", // Request body data: "{body}", }) .done(function(data) { alert("success"); }) .fail(function() { alert("error"); }); }); </script> </body> </html> #import <Foundation/Foundation.h> int main(int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSString* path = @"https://northeurope.api.cognitive.microsoft.com/face/v1.0/detect"; NSArray* array = @[ // Request parameters @"entities=true", @"returnFaceId=true", @"returnFaceLandmarks=false", @"returnFaceAttributes={string}", @"recognitionModel=recognition_03", @"returnRecognitionModel=false", @"detectionModel=detection_02", @"faceIdTimeToLive=86400", ]; NSString* string = [array componentsJoinedByString:@"&"]; path = [path stringByAppendingFormat:@"?%@", string]; NSLog(@"%@", path); NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]]; [_request setHTTPMethod:@"POST"]; // Request headers [_request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"]; [_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"]; // Request body [_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]]; NSURLResponse *response = nil; NSError *error = nil; NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error]; if (nil != error) { NSLog(@"Error: %@", error); } else { NSError* error = nil; NSMutableDictionary* json = nil; NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding]; NSLog(@"%@", dataString); if (nil != _connectionData) { json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error]; } if (error || !json) { NSLog(@"Could not parse loaded json with error:%@", error); } NSLog(@"%@", json); _connectionData = nil; } [pool drain]; return 0; } <?php // This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/) require_once 'HTTP/Request2.php'; $request = new Http_Request2('https://northeurope.api.cognitive.microsoft.com/face/v1.0/detect'); $url = $request->getUrl(); $headers = array( // Request headers 'Content-Type' => 'application/json', 'Ocp-Apim-Subscription-Key' => '{subscription key}', ); $request->setHeader($headers); $parameters = array( // Request parameters 'returnFaceId' => 'true', 'returnFaceLandmarks' => 'false', 'returnFaceAttributes' => '{string}', 'recognitionModel' => 'recognition_03', 'returnRecognitionModel' => 'false', 'detectionModel' => 'detection_02', 'faceIdTimeToLive' => '86400', ); $url->setQueryVariables($parameters); $request->setMethod(HTTP_Request2::METHOD_POST); // Request body $request->setBody("{body}"); try { $response = $request->send(); echo $response->getBody(); } catch (HttpException $ex) { echo $ex; } ?> ########### Python 2.7 ############# import httplib, urllib, base64 headers = { # Request headers 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': '{subscription key}', } params = urllib.urlencode({ # Request parameters 'returnFaceId': 'true', 'returnFaceLandmarks': 'false', 'returnFaceAttributes': '{string}', 'recognitionModel': 'recognition_03', 'returnRecognitionModel': 'false', 'detectionModel': 'detection_02', 'faceIdTimeToLive': '86400', 
}) try: conn = httplib.HTTPSConnection('northeurope.api.cognitive.microsoft.com') conn.request("POST", "/face/v1.0/detect?%s" % params, "{body}", headers) response = conn.getresponse() data = response.read() print(data) conn.close() except Exception as e: print("[Errno {0}] {1}".format(e.errno, e.strerror)) #################################### ########### Python 3.2 ############# import http.client, urllib.request, urllib.parse, urllib.error, base64 headers = { # Request headers 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': '{subscription key}', } params = urllib.parse.urlencode({ # Request parameters 'returnFaceId': 'true', 'returnFaceLandmarks': 'false', 'returnFaceAttributes': '{string}', 'recognitionModel': 'recognition_03', 'returnRecognitionModel': 'false', 'detectionModel': 'detection_02', 'faceIdTimeToLive': '86400', }) try: conn = http.client.HTTPSConnection('northeurope.api.cognitive.microsoft.com') conn.request("POST", "/face/v1.0/detect?%s" % params, "{body}", headers) response = conn.getresponse() data = response.read() print(data) conn.close() except Exception as e: print("[Errno {0}] {1}".format(e.errno, e.strerror)) #################################### require 'net/http' uri = URI('https://northeurope.api.cognitive.microsoft.com/face/v1.0/detect') uri.query = URI.encode_www_form({ # Request parameters 'returnFaceId' => 'true', 'returnFaceLandmarks' => 'false', 'returnFaceAttributes' => '{string}', 'recognitionModel' => 'recognition_03', 'returnRecognitionModel' => 'false', 'detectionModel' => 'detection_02', 'faceIdTimeToLive' => '86400' }) request = Net::HTTP::Post.new(uri.request_uri) # Request headers request['Content-Type'] = 'application/json' # Request headers request['Ocp-Apim-Subscription-Key'] = '{subscription key}' # Request body request.body = "{body}" response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http| http.request(request) end puts response.body
If you've spent any time looking at online NLP resources, you've probably run into spelling correctors. Writing a simple but reasonably accurate and powerful spelling corrector can be done with very few lines of code. I found this sample program by Peter Norvig (first written in 2006) that does it in about 30 lines. As an exercise, I decided to port it over to Estonian. If you want to do something similar, here's what you'll need to do.

First: You need some text!

Norvig's program begins by processing a text file; specifically, it extracts tokens based on a very simple regular expression.

import re
from collections import Counter

def words(text): return re.findall(r'\w+', text.lower())

WORDS = Counter(words(open('big.txt').read()))

The program builds its dictionary of known "words" by parsing a text file (big.txt) and counting all the "words" it finds in the text file, where "word" for the program means any continuous string of one or more letters, digits, and the underscore _ (r'\w+'). The idea is that the program can provide spelling corrections if it is exposed to a large number of correct spellings of a variety of words. Norvig ran his original program on just over 1 million words, which resulted in a dictionary of about 30,000 unique words.

To build your own text file, the easiest route is to use existing corpora, if available. For Estonian, there are many freely available corpora. In fact, Sven Laur and colleagues built clear workflows for downloading and processing these corpora in Python (estnltk). I decided to use the Estonian Reference Corpus. I excluded the chatrooms part of the corpus (because it was full of spelling errors), but I still ended up with just north of 3.5 million unique words in a corpus of over 200 million total words.

Measuring string similarity through edit distance

Norvig takes care to explain how the program works both mechanically (i.e., the code) and theoretically (i.e., probability theory). I want to highlight one piece of that: edit distance. Edit distance is a means to measure similarity between two strings based on how many changes (e.g., deletions, additions, transpositions, …) must be made to string1 in order to yield string2. The spelling corrector utilizes edit distance to find suitable corrections in the following way. Given a test string, …

If the string matches a word the program knows, then the string is a correctly spelled word.
If there are no exact matches, generate all strings that are one change away from the test string. If any of them are words the program knows, choose the one with the greatest frequency in the overall corpus.
If there are no exact matches or matches at an edit distance of 1, check all strings that are two changes away from the test string. If any of them are words the program knows, choose the one with the greatest frequency in the overall corpus.
If there are still no matches, return the test string; there is nothing similar in the corpus, so the program can't figure it out.

The point in the program that generates all the strings that are one change away is given below. This is the next place where you'll need to edit the code to adapt it for another language!

def edits1(word):
    "All edits that are one edit away from `word`."
    letters    = 'abcdefghijklmnopqrstuvwxyz'
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces   = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts    = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

Without getting into the technical details of the implementation, the code takes an input string and returns a set containing all strings that differ from the input in only one way: with a deletion, transposition, replacement, or insertion. So, if our input was 'paer', edits1 would return a set including (among other things) par, paper, pare, and pier.

The code I've represented above will need to be edited to be used with many non-English languages. Can you see why? The program relies on a list of letters in order to create replaces and inserts. Of course, Estonian does not have the same alphabet as English! So for Estonian, you have to change the line that sets the value for letters to match the Estonian alphabet (adding ä, ö, õ, ü, š, ž; subtracting c, q, w, x, y):

letters = 'aäbdefghijklmnoöõprsštuüvzž'

Once you make that change, it should be up and running! Before wrapping up this post, I want to discuss one key difference between English and Estonian that can lead to some different results.

A difference between English and Estonian: morphology!

In Norvig's original implementation for English, a corpus of 1,115,504 words yielded 32,192 unique words. I chopped my corpus down to the same length, and I found a much larger number of unique words: 170,420! What's going on here? Does Estonian just have a much richer vocabulary than English? I'd say that's unlikely; rather, this has to do with what the program treats as a word. As far as the program is concerned, be, am, is, are, were, was, being, been are all different words, because they're different sequences of characters. When the program counts unique words, it will count each form of be as a unique word. There is a long-standing joke in linguistics that we can't define what a word is, but many speakers have the intuition that is and am are not "different words": they're different forms of the same word.

The problem is compounded in Estonian, which has very rich morphology. The verb be in English has 8 different forms, which is high for English. Most verbs in English have just 4 or 5. In Estonian, most verbs have over 30 forms. In fact, it's similar for nouns, which all have 12-14 "unique" forms (times two if they can be pluralized). Because this simple spelling corrector defines word as roughly "a unique string of letters with spaces on either side", it will treat all forms of olema 'be' as different words.

Why might this matter? Well, this program uses probability to recommend the most likely correction for any misspelled words: choose the word (i) with the fewest changes that (ii) is most common in the corpus. Because of how the program defines "word", the resulting probabilities are not about words on a higher level, they're about strings, e.g., How frequent is the string 'is' in the corpus? As a result, it's possible that a misspelling of a common word could get beaten by a less common word (if, for example, it's a particularly rare form of the common word).
This problem could be avoided by calculating probabilities on a version of the corpus that has been stemmed, but in truth, the real answer is probably to just build a more sophisticated spelling corrector! Spelling correction: mostly an English problem anyway Ultimately, designing spelling correction systems based on English might lead them to have an English bias, i.e., to not necessarily work as effectively on other languages. But that’s probably fine, because spelling is primarily an English problem anyway. When something is this easy to put together, you may want to do it just for fun, and you’ll get to practice some things—in this case, building a data set—along the way.
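As a postscript: the candidate-selection logic described in the numbered list earlier is itself only a handful of lines. This is essentially how it looks in Norvig's program (a sketch reproduced from memory; WORDS and edits1 are the pieces defined above):

def P(word, N=sum(WORDS.values())):
    "Probability of `word` in the corpus."
    return WORDS[word] / N

def known(words):
    "The subset of `words` that appear in the dictionary WORDS."
    return set(w for w in words if w in WORDS)

def edits2(word):
    "All edits that are two edits away from `word`."
    return (e2 for e1 in edits1(word) for e2 in edits1(e1))

def candidates(word):
    "Known word, else known words one edit away, else two edits, else the word itself."
    return known([word]) or known(edits1(word)) or known(edits2(word)) or [word]

def correction(word):
    "Most probable spelling correction for `word`."
    return max(candidates(word), key=P)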
I come from the world of MATLAB and numerical computing, where for loops are shorn and vectors are king. During my PhD at UVM, Professor Lakoba's Numerical Analysis class was one of the most challenging courses I took, and the deep knowledge of numerical code still sticks with me. My favorite example of a vectorization is when a colleague shared his Lorenz 96 code with me, after writing a really cool paper about it that footnoted the massive amount of computation involved. Well, vectorizing the inner loop was about 4x faster, so now the footnote is just a carbon footprint.

Fast numerical code is what makes machine learning even possible these days, though I'm not sure how many of the kids these days can write a QR decomposition in C. I'm kidding, because I haven't done it, but I sure as heck could write it in MATLAB (at one point) or in Numpy or Julia now (I'll stick to just magrittr and dplyr in R). A lot of the work I do at MassMutual is fundamentally numerical computation, and the difference between a pipeline that takes hours, or even minutes, and one that takes seconds is a big deal. Seconds means we can iterate, try more options, and move faster. Still, a lot of numerical code is written in pure Python (no Cython, no Numba), for the flexibility. I'm going to argue that this is a bad idea! Here's a paraphrased email from a colleague:

In pseudocode, this is the 'actuarial' coding dilemma I ran into months back:

EOM = 0
for months in years:
    PREM = 50
    BOM = EOM + PREM
    WIT = 5
    EOM = BOM - WIT

A simple example, but I think it shows the BOM/EOM interdependence (there are a few other variables with a similar relationship). You can't vectorize BOM without knowing EOM, and you can't vectorize EOM until you know BOM. Then you might have situations where IF WIT > 0, PREM = 0. Basically a lot of inter-dependence emerges. Now a lot of the function does not appear easily vectorizable.

Well, I can vectorize this, and I did. Here's the non-vectorized version in Python:

import numpy as np

years = 10
bom = np.zeros(years*12)
eom = np.zeros(years*12)
for month in range(1, years*12):
    prem = 50
    bom[month] = eom[month-1] + prem
    wit = 5
    eom[month] = bom[month] - wit

And here's the vectorized version:

import numpy as np

years = 10
prem = 50
wit = 5
eom = np.arange(years*12)*prem - np.arange(years*12)*wit
# and if you still want bom as an array: the loop gives bom[m] = eom[m] + wit
# for every m >= 1 (the loop never sets bom[0], so ignore index 0)
bom = eom + wit

I also wrote the for-loop even more flexibly (read: as slow as I could think to) by using a list of dicts:

years = 10
prem = 50
wit = 5
result = [{'bom': 0, 'eom': 0}]
for month in range(1, years*12):
    inner = {}
    inner.update({'bom': result[month-1]['eom'] + prem})
    inner.update({'eom': inner['bom'] - wit})
    result.append(inner)

This one above returns a different type of thing, a list of dicts... not two arrays. We can also import Pandas to stuff results into for all three of the above (so they're consistent outputs, we could save to excel, etc.).
If we have Pandas loaded, we could use an empty dataframe for iteration, so one more option:

import numpy as np
import pandas as pd

years = 10
prem = 50
wit = 5
df = pd.DataFrame(data={'bom': np.zeros(years*12),
                        'eom': np.zeros(years*12)})
for i, row in df.iterrows():
    if i > 0:
        # write through df.loc: the `row` yielded by iterrows() is a copy,
        # so assigning to it would not update the dataframe
        df.loc[i, 'bom'] = df.loc[i-1, 'eom'] + prem
        df.loc[i, 'eom'] = df.loc[i, 'bom'] - wit

With all of those types of iteration, and with the option to return a dataframe as the result, this is what we get:

vectorized  return_type  iterate_type  time       slowdown
True        numpy        --            0.607289   1
False       numpy        numpy         15.2983    25
False       list(dict)   dict          9.2112     15
True        pandas       --            37.8838    62
False       pandas       numpy         47.0335    77
False       pandas       dict          1717.72    2828
False       pandas       pandas        77.5634    127
False       list         list          1.80763    2
False       numpy        numpy         14.6285    24
False       list         c array       0.663318   1

Addendum

I also added a few Cython versions of the code, showing that you can get vectorized performance without numpy, by using C. This might indeed strike the best balance between readability (keep the for-loop!) and speed. Numba may also retain the same speedups (it may be as fast as Cython/vectorized Numpy). In both cases (Cython/Numba), you have to be careful about which datatypes you're using (no dicts or pandas!). I think that it would be possible to make the Cython + Numpy loop just as fast as vectorized numpy if you are smarter about how to integrate them. All of the code, including the Cython, is available here: https://github.com/andyreagan/vectorizing-matters.
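To make the Numba point concrete, here is a minimal sketch of what that route looks like (illustrative only; the linked repo has the actual benchmarked code): keep the readable for-loop, add a JIT decorator, and pass plain numpy arrays and scalars rather than dicts or dataframes.

import numpy as np
from numba import njit

@njit
def eom_loop(n_months, prem, wit):
    # same recurrence as the pure-Python loop above, compiled to
    # machine code on the first call
    eom = np.zeros(n_months)
    for m in range(1, n_months):
        eom[m] = eom[m-1] + prem - wit
    return eom

eom = eom_loop(10 * 12, 50.0, 5.0)  # first call compiles; later calls are fast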
August 25, 2020 — A guest post by Rising Odegua, Independent Researcher; Stephen Oni, Data Science Nigeria

Danfo.js is an open-source JavaScript library that provides high-performance, intuitive, and easy-to-use data structures for manipulating and processing structured data. Danfo.js is heavily inspired by the Python Pandas library and provides a similar interface/API. This means that users familiar with the Panda…

const dfd = require("danfojs-node")
const tf = require("@tensorflow/tfjs-node")

let data = tf.tensor2d([[20,30,40], [23,90,28]])
let df = new dfd.DataFrame(data)
let tf_tensor = df.tensor
console.log(tf_tensor);
tf_tensor.print()

Tensor {
  kept: false,
  isDisposedInternal: false,
  shape: [ 2, 3 ],
  dtype: 'float32',
  size: 6,
  strides: [ 3 ],
  dataId: {},
  id: 3,
  rankType: '2'
}
Tensor
    [[20, 30, 40],
     [23, 90, 28]]

You can easily convert Arrays, JSONs, or Objects to DataFrame objects for manipulation.

const dfd = require("danfojs-node")

json_data = [{ A: 0.4612, B: 4.28283, C: -1.509, D: -1.1352 },
             { A: 0.5112, B: -0.22863, C: -3.39059, D: 1.1632 },
             { A: 0.6911, B: -0.82863, C: -1.5059, D: 2.1352 },
             { A: 0.4692, B: -1.28863, C: 4.5059, D: 4.1632 }]
df = new dfd.DataFrame(json_data)
df.print()

const dfd = require("danfojs-node")

obj_data = {'A': ["A1", "A2", "A3", "A4"],
            'B': ["bval1", "bval2", "bval3", "bval4"],
            'C': [10, 20, 30, 40],
            'D': [1.2, 3.45, 60.1, 45],
            'E': ["test", "train", "test", "train"]
           }
df = new dfd.DataFrame(obj_data)
df.print()

const dfd = require("danfojs-node")

let data = {"Name": ["Apples", "Mango", "Banana", undefined],
            "Count": [NaN, 5, NaN, 10],
            "Price": [200, 300, 40, 250]}
let df = new dfd.DataFrame(data)
let df_filled = df.fillna({columns: ["Name", "Count"],
                           values: ["Apples", df["Count"].mean()]})
df_filled.print()

const dfd = require("danfojs-node")

let data = { "Name": ["Apples", "Mango", "Banana", "Pear"],
             "Count": [21, 5, 30, 10],
             "Price": [200, 300, 40, 250] }
let df = new dfd.DataFrame(data)
let sub_df = df.loc({ rows: ["0:2"], columns: ["Name", "Price"] })
sub_df.print()

const dfd = require("danfojs-node")

//read the first 10000 rows
dfd.read_csv("file:///home/Desktop/bigdata.csv", chunk=10000)
    .then(df => {
        df.tail().print()
    }).catch(err => {
        console.log(err);
    })

const dfd = require("danfojs-node")

let data = ["dog","cat","man","dog","cat","man","man","cat"]
let series = new dfd.Series(data)
let encode = new dfd.LabelEncoder()
encode.fit(series)
let sf_enc = encode.transform(series)
let new_sf = encode.transform(["dog","man"])

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <script src="https://cdn.jsdelivr.net/npm/danfojs@0.1.1/dist/index.min.js"></script>
    <title>Document</title>
</head>
<body>
    <div id="plot_div"></div>
    <script>
        dfd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv")
            .then(df => {
                var layout = {
                    title: 'A financial charts',
                    xaxis: { title: 'Date' },
                    yaxis: { title: 'Count' }
                }
                new_df = df.set_index({ key: "Date" })
                new_df.plot("plot_div").line({ columns: ["AAPL.Open", "AAPL.High"], layout: layout })
            }).catch(err => {
                console.log(err);
            })
    </script>
</body>
</html>

const dfd = require("danfojs-node")
const tf = require("@tensorflow/tfjs-node")

async function load_process_data() {
    let df = await dfd.read_csv("https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/stuff/titanic.csv")

    //feature engineering: extract all titles from the Name column
    let title = df['Name'].apply((x) => {
        return x.split(".")[0]
  }).values

  // replace in df
  df.addColumn({ column: "Name", value: title })

  // label encode the Name feature
  let encoder = new dfd.LabelEncoder()
  let cols = ["Sex", "Name"]
  cols.forEach(col => {
    encoder.fit(df[col])
    enc_val = encoder.transform(df[col])
    df.addColumn({ column: col, value: enc_val })
  })

  let Xtrain, ytrain;
  Xtrain = df.iloc({ columns: [`1:`] })
  ytrain = df['Survived']

  // Standardize the data with MinMaxScaler
  let scaler = new dfd.MinMaxScaler()
  scaler.fit(Xtrain)
  Xtrain = scaler.transform(Xtrain)

  return [Xtrain.tensor, ytrain.tensor] // return the data as tensors
}

Next, we create a simple neural network using TensorFlow.js.

function get_model() {
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [7], units: 124, activation: 'relu', kernelInitializer: 'leCunNormal' }));
  model.add(tf.layers.dense({ units: 64, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 32, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 1, activation: "sigmoid" }))
  model.summary();
  return model
}

Finally, we perform training by first loading the model and the processed data as tensors. These can be fed directly to the neural network.

async function train() {
  const model = await get_model()
  const data = await load_process_data()
  const Xtrain = data[0]
  const ytrain = data[1]

  model.compile({
    optimizer: "rmsprop",
    loss: 'binaryCrossentropy',
    metrics: ['accuracy'],
  });

  console.log("Training started....")
  await model.fit(Xtrain, ytrain, {
    batchSize: 32,
    epochs: 15,
    validationSplit: 0.2,
    callbacks: {
      onEpochEnd: async (epoch, logs) => {
        console.log(`EPOCH (${epoch + 1}): Train Accuracy: ${(logs.acc * 100).toFixed(2)}, Val Accuracy: ${(logs.val_acc * 100).toFixed(2)}\n`);
      }
    }
  });
};

train()

The reader will notice that the API of Danfo is very similar to Pandas, and a non-JavaScript programmer can easily read and understand the code. You can find the full source code of the demo above here: https://gist.github.com/risenW/f54e4e5b6d92e7b1b9b1f30e884ca83c
# -*- coding: utf-8 -*-

"""JSON flat file database system."""

import codecs
import os
import os.path
import glob
import re
from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN
import redis
import json
import time

from rophako.settings import Config
from rophako.utils import handle_exception
from rophako.log import logger

redis_client = None
cache_lifetime = 60*60  # 1 hour


def get(document, cache=True):
    """Get a specific document from the DB."""
    logger.debug("JsonDB: GET {}".format(document))

    # Exists?
    if not exists(document):
        logger.debug("Requested document doesn't exist")
        return None

    path = mkpath(document)
    stat = os.stat(path)

    # Do we have it cached?
    data = get_cache(document) if cache else None
    if data:
        # Check if the cache is fresh.
        if stat.st_mtime > get_cache(document+"_mtime"):
            del_cache(document)
            del_cache(document+"_mtime")
        else:
            return data

    # Get the JSON data.
    data = read_json(path)

    # Cache and return it.
    if cache:
        set_cache(document, data, expires=cache_lifetime)
        set_cache(document+"_mtime", stat.st_mtime, expires=cache_lifetime)

    return data


def commit(document, data, cache=True):
    """Insert/update a document in the DB."""
    # Need to create the file?
    path = mkpath(document)
    if not os.path.isfile(path):
        parts = path.split("/")
        parts.pop()  # Remove the file part
        directory = list()

        # Create all the folders.
        for part in parts:
            directory.append(part)
            segment = "/".join(directory)
            if len(segment) > 0 and not os.path.isdir(segment):
                logger.debug("JsonDB: mkdir {}".format(segment))
                os.mkdir(segment, 0o755)

    # Update the cached document.
    if cache:
        set_cache(document, data, expires=cache_lifetime)
        set_cache(document+"_mtime", time.time(), expires=cache_lifetime)

    # Write the JSON.
    write_json(path, data)


def delete(document):
    """Delete a document from the DB."""
    path = mkpath(document)
    if os.path.isfile(path):
        logger.info("Delete DB document: {}".format(path))
        os.unlink(path)
        del_cache(document)


def exists(document):
    """Query whether a document exists."""
    path = mkpath(document)
    return os.path.isfile(path)


def list_docs(path):
    """List all the documents at the path."""
    path = mkpath("{}/*".format(path))
    docs = list()
    for item in glob.glob(path):
        name = re.sub(r'\.json$', '', item)
        name = name.split("/")[-1]
        docs.append(name)
    return docs


def mkpath(document):
    """Turn a DB path into a JSON file path."""
    if document.endswith(".json"):
        # Let's not do that.
        raise Exception("mkpath: document path already includes .json extension!")
    return "{}/{}.json".format(Config.db.db_root, str(document))


def read_json(path):
    """Slurp, decode and return the data from a JSON document."""
    path = str(path)
    if not os.path.isfile(path):
        raise Exception("Can't read JSON file {}: file not found!".format(path))

    # Don't allow any fishy looking paths.
    if ".." in path:
        logger.error("ERROR: JsonDB tried to read a path with two dots: {}".format(path))
        raise Exception()

    # Open and lock the file.
    fh = codecs.open(path, 'r', 'utf-8')
    flock(fh, LOCK_SH)
    text = fh.read()
    flock(fh, LOCK_UN)
    fh.close()

    # Decode.
    try:
        data = json.loads(text)
    except:
        logger.error("Couldn't decode JSON data from {}".format(path))
        handle_exception(Exception("Couldn't decode JSON from {}\n{}".format(
            path,
            text,
        )))
        data = None

    return data


def write_json(path, data):
    """Write a JSON document."""
    path = str(path)

    # Don't allow any fishy looking paths.
    if ".." in path:
        logger.error("ERROR: JsonDB tried to write a path with two dots: {}".format(path))
        raise Exception()

    logger.debug("JsonDB: WRITE > {}".format(path))

    # Open and lock the file.
    fh = None
    if os.path.isfile(path):
        fh = codecs.open(path, 'r+', 'utf-8')
    else:
        fh = codecs.open(path, 'w', 'utf-8')
    flock(fh, LOCK_EX)

    # Write it.
    fh.truncate(0)
    fh.write(json.dumps(data,
        sort_keys=True,
        indent=4,
        separators=(',', ': ')))

    # Unlock and close.
    flock(fh, LOCK_UN)
    fh.close()


############################################################################
# Redis Caching Functions                                                  #
############################################################################

def get_redis():
    """Connect to Redis or return the existing connection."""
    global redis_client
    if not redis_client:
        redis_client = redis.StrictRedis(
            host = Config.db.redis_host,
            port = Config.db.redis_port,
            db   = Config.db.redis_db,
        )
    return redis_client


def set_cache(key, value, expires=None):
    """Set a key in the Redis cache."""
    key = Config.db.redis_prefix + key
    try:
        client = get_redis()
        client.set(key, json.dumps(value))

        # Expiration date?
        if expires:
            client.expire(key, expires)
    except:
        logger.error("Redis exception: couldn't set_cache {}".format(key))


def get_cache(key):
    """Get a cached item."""
    key = Config.db.redis_prefix + key
    value = None
    try:
        client = get_redis()
        value = client.get(key)
        if value:
            value = json.loads(value)
    except:
        logger.warning("Redis exception: couldn't get_cache {}".format(key))
        value = None
    return value


def del_cache(key):
    """Delete a cached item."""
    key = Config.db.redis_prefix + key
    client = get_redis()
    client.delete(key)
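The mtime-based cache invalidation in get() above is the interesting pattern in this module. Here is a minimal, self-contained sketch of the same idea without Redis or the rophako config; the class name and structure are illustrative only, not part of this codebase:

import json
import os

class SimpleJsonCache:
    """Cache parsed JSON documents, invalidated by the file's mtime."""

    def __init__(self):
        self._cache = {}  # path -> (mtime, data)

    def get(self, path):
        mtime = os.stat(path).st_mtime
        hit = self._cache.get(path)
        if hit and hit[0] >= mtime:
            return hit[1]  # cache is still fresh
        with open(path, "r", encoding="utf-8") as fh:
            data = json.load(fh)   # re-read: file changed or never cached
        self._cache[path] = (mtime, data)
        return data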
Description

You are playing the Bulls and Cows game with your friend. You write down a secret number and ask your friend to guess what the number is. When your friend makes a guess, you provide a hint with the following info:

The number of "bulls", which are digits in the guess that are in the correct position.
The number of "cows", which are digits in the guess that are in your secret number but are located in the wrong position. Specifically, the non-bull digits in the guess that could be rearranged such that they become bulls.

Given the secret number secret and your friend's guess guess, return the hint for your friend's guess. The hint should be formatted as "xAyB", where x is the number of bulls and y is the number of cows. Note that both secret and guess may contain duplicate digits.

Example 1:
Input: secret = "1807", guess = "7810"
Output: "1A3B"
Explanation: Bulls are connected with a '|' and cows are underlined:
"1807"
   |
"7810"

Example 2:
Input: secret = "1123", guess = "0111"
Output: "1A1B"
Explanation: Bulls are connected with a '|' and cows are underlined:
"1123"        "1123"
   |     or      |
"0111"        "0111"
Note that only one of the two unmatched 1s is counted as a cow since the non-bull digits can only be rearranged to allow one 1 to be a bull.

Example 3:
Input: secret = "1", guess = "0"
Output: "0A0B"

Example 4:
Input: secret = "1", guess = "1"
Output: "1A0B"

Constraints:
1 <= secret.length, guess.length <= 1000
secret.length == guess.length
secret and guess consist of digits only.

Explanation

Two-pass Python solution:

class Solution:
    def getHint(self, secret: str, guess: str) -> str:
        bulls = 0
        cows = 0
        secret_dict = {}
        for c1 in secret:
            secret_dict[c1] = secret_dict.get(c1, 0) + 1
        for c1, c2 in zip(secret, guess):
            if c1 == c2:
                bulls += 1
                secret_dict[c1] = secret_dict.get(c1) - 1
        for c1, c2 in zip(secret, guess):
            if c1 != c2 and c2 in secret_dict and secret_dict[c2] > 0:
                cows += 1
                secret_dict[c2] = secret_dict.get(c2) - 1
        return "{}A{}B".format(bulls, cows)

Time Complexity: ~N
Space Complexity: ~1
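For reference, the counting can also be done in a single pass over the two strings using two digit-count arrays. This variant is not from the original write-up, just a common alternative:

class SolutionOnePass:
    def getHint(self, secret: str, guess: str) -> str:
        bulls = 0
        s_count = [0] * 10  # counts of non-bull secret digits
        g_count = [0] * 10  # counts of non-bull guess digits
        for s, g in zip(secret, guess):
            if s == g:
                bulls += 1
            else:
                s_count[int(s)] += 1
                g_count[int(g)] += 1
        # each digit contributes min(secret occurrences, guess occurrences) cows
        cows = sum(min(s, g) for s, g in zip(s_count, g_count))
        return "{}A{}B".format(bulls, cows)

# Usage: SolutionOnePass().getHint("1807", "7810") returns "1A3B"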
This post covers problems 16-20 from the LeetCode algorithms section.

3Sum Closest

Problem 16, 3Sum Closest: given an array S of n integers, find three integers in S such that their sum is closest to a given target value. Return the sum; you may assume each input has exactly one solution.

Example: with array S = { -1 2 1 -4 } and target = 1, the sum closest to the target is 2 (-1 + 2 + 1 = 2).

This problem extends problem 15. Following the same idea, first sort the array, then take the sum a(i) + a(i+k) + a(n). If the result equals the target, return it directly. If the result is greater than the target, decrease the index n step by step; at each step compare the difference between the current sum and the target against the previous best difference, keep whichever has the smaller absolute value, and remember the corresponding sum. If the result is less than the target, increase the index (i+k). The Java code is as follows:

// ThreeSumClosest.java v1.0
public class Solution {
    public int threeSumClosest(int[] nums, int target) {
        int sum = Integer.MAX_VALUE;
        int diff = Integer.MAX_VALUE;
        int count = nums.length;
        Arrays.sort(nums);
        for (int i = 0; i < count - 2; i++) {
            int j = i + 1, k = count - 1;
            while (j < k) {
                int curSum = nums[i] + nums[j] + nums[k];
                int curDiff = curSum - target;
                if (curDiff == 0) return curSum;
                diff = Math.abs(diff) < Math.abs(curDiff) ? diff : curDiff;
                sum = target + diff;
                if (curDiff > 0) {
                    k--;
                    while (j < k && nums[k] == nums[k + 1]) k--;
                } else {
                    j++;
                    while (j < k && nums[j] == nums[j - 1]) j++;
                }
            }
        }
        return sum;
    }
}

Status: Accepted · Tests: 120 / 120 · Run Time: 13 ms · Language: Java

// ThreeSumClosest.java v1.1
public class Solution {
    public int threeSumClosest(int[] nums, int target) {
        int sum = Integer.MAX_VALUE;
        int diff = Integer.MAX_VALUE;
        int count = nums.length;
        Arrays.sort(nums);
        for (int i = 0; i < count - 2;) {
            int j = i + 1, k = count - 1;
            while (j < k) {
                int curSum = nums[i] + nums[j] + nums[k];
                int curDiff = Math.abs(curSum - target);
                if (curDiff == 0) return curSum;
                if (curDiff < diff) {
                    diff = curDiff;
                    sum = curSum;
                }
                if (curSum > target) {
                    k--;
                    while (j < k && nums[k] == nums[k + 1]) k--;
                } else {
                    j++;
                    while (j < k && nums[j] == nums[j - 1]) j++;
                }
            }
            i++;
            while (i < count - 2 && nums[i] == nums[i - 1]) i++;
        }
        return sum;
    }
}

Status: Accepted · Tests: 120 / 120 · Run Time: 11 ms · Language: Java

Python version:

# 3sum_closest.py
class Solution(object):
    def threeSumClosest(self, nums, target):
        """
        :type nums: List[int]
        :type target: int
        :rtype: int
        """
        res, diff = sys.maxsize, sys.maxsize
        count = len(nums)
        nums.sort()
        for i in range(count - 2):
            if i > 0 and i < count - 2 and nums[i] == nums[i - 1]:
                continue
            j, k = i + 1, count - 1
            while j < k:
                cur_sum = nums[i] + nums[j] + nums[k]
                cur_diff = cur_sum - target
                if cur_diff == 0:
                    return cur_sum
                diff = diff if abs(diff) < abs(cur_diff) else cur_diff
                res = target + diff
                if cur_diff > 0:
                    k -= 1
                    while j < k and nums[k] == nums[k + 1]:
                        k -= 1
                else:
                    j += 1
                    while j < k and nums[j] == nums[j - 1]:
                        j += 1
        return res

Status: Accepted · Tests: 120 / 120 · Run Time: 148 ms · Language: Python

Letter Combinations of a Phone Number

Problem 17: given a string of digits, return all possible letter combinations that the digits could represent on a phone's nine-key keypad.

Example: input "23", output [ "ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf" ].

Viewed abstractly, this is simply permutations and combinations. The possible outputs for the keys "23" are all pairings of one character from "abc" with one character from "def". Pick one character from the string each digit maps to, combine it with a character from the next digit's string, and concatenate step by step. Clearly this can be handled with recursion, where the recursion depth equals the number of digit keys.

// LetterCombinationsOfaPhoneNumber.java
public class Solution {
    private static String[] keymap = {"abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"};

    public List<String> letterCombinations(String digits) {
        List<String> res = new ArrayList<>();
        if (digits == null || digits.length() == 0) return res;
        this.combineLetters(digits, "", digits.length(), 0, res);
        return res;
    }

    private void combineLetters(String digits, String str, int len, int pos, List<String> list) {
        String key = keymap[digits.charAt(pos) - '2'];
        for (int i = 0; i < key.length(); i++) {
            if (pos == len - 1) list.add(str + key.charAt(i));
            else combineLetters(digits, str + key.charAt(i), len, pos + 1, list);
        }
    }
}
Status: Accepted · Tests: 25 / 25 · Run Time: 1 ms · Language: Java

4Sum

Problem 18, 4Sum: given an array S of n integers, find all unique quadruplets a, b, c, d in S such that a + b + c + d = target. Duplicates must be removed.

For example, with S = [ 1, 0, -1, 0, -2, 2 ] and target = 0, the result is [ [-1, 0, 0, 1], [-2, -1, 1, 2], [-2, 0, 0, 2] ].

LeetCode problem 15 (3Sum) already finds three numbers in an array summing to zero; this problem asks for four numbers summing to a target. We can reuse the 3Sum algorithm directly: pick one number from the array, pass the remaining elements and the required difference to the 3Sum routine, and if 3Sum returns valid results, insert the chosen number into each of them. The code is as follows:

// FourSum.java v1.0
public class Solution {
    public List<List<Integer>> fourSum(int[] nums, int target) {
        List<List<Integer>> res = new ArrayList<>();
        if (nums == null || nums.length < 4) return res;
        Arrays.sort(nums);
        int count = nums.length;
        for (int i = 0; i < count - 3; i++) {
            while (i > 0 && i < count - 3 && nums[i] == nums[i - 1]) i++;
            int diff = target - nums[i];
            List<List<Integer>> lists = this.threeSum(Arrays.copyOfRange(nums, i + 1, count), diff);
            if (lists.isEmpty()) continue;
            for (List<Integer> list : lists) {
                list.add(nums[i]);
                res.add(list);
            }
        }
        return res;
    }

    // 3Sum
    private List<List<Integer>> threeSum(int[] nums, int target) {
        List<List<Integer>> lists = new ArrayList<>();
        if (nums == null || nums.length < 3) return lists;
        int count = nums.length, i = 0;
        while (i < count - 2) {
            int j = i + 1;
            int k = count - 1;
            while (j < k) {
                int sum = nums[i] + nums[j] + nums[k];
                if (sum == target) {
                    List<Integer> list = new ArrayList<>();
                    list.add(nums[i]);
                    list.add(nums[j++]);
                    list.add(nums[k--]);
                    lists.add(list);
                    while (j < k && nums[j] == nums[j - 1]) j++;
                    while (j < k && nums[k] == nums[k + 1]) k--;
                } else if (sum < target) {
                    j++;
                    while (j < k && nums[j] == nums[j - 1]) j++;
                } else {
                    k--;
                    while (j < k && nums[k] == nums[k + 1]) k--;
                }
            }
            i++;
            while (i < count - 2 && nums[i] == nums[i - 1]) i++;
        }
        return lists;
    }
}

Status: Accepted · Tests: 282 / 282 · Run Time: 71 ms · Language: Java

Remove Nth Node From End of List

Problem 19: given a linked list, remove the n-th node from the end of the list and return its head.

For example, given the linked list 1 -> 2 -> 3 -> 4 -> 5 and n = 2, after removing the second node from the end the list becomes 1 -> 2 -> 3 -> 5. Assume n is always valid.

A plain linked list is singly linked. Removing the n-th node counted from the front is easy, but removing from the back takes some thought. The most direct way is to traverse the list once to get its length, subtract n to get the position from the front, then traverse again to find the node to remove; that requires two passes. A slightly cleverer method needs only one pass: let a fast pointer move n steps first, then start a slow pointer from the head. When the fast pointer reaches the last node, the slow pointer lands exactly at the node before the one to remove.

// RemoveNthNodeFromEndOfList.java
/**
 * Definition for singly-linked list.
 * public class ListNode {
 *     int val;
 *     ListNode next;
 *     ListNode(int x) { val = x; }
 * }
 */
public class Solution {
    public ListNode removeNthFromEnd(ListNode head, int n) {
        ListNode res = new ListNode(0);
        res.next = head;
        ListNode node1 = res, node2 = res;
        for (int i = 0; i < n; i++) node1 = node1.next;
        while (node1.next != null) {
            node1 = node1.next;
            node2 = node2.next;
        }
        node2.next = node2.next.next;
        return res.next;
    }
}

Status: Accepted · Tests: 207 / 207 · Run Time: 1 ms · Language: Java

Valid Parentheses

Problem 20, Valid Parentheses: given a string containing only the characters '(', ')', '{', '}', '[' and ']', determine whether it is valid.

A valid string must be properly closed: "()", "{[()]}" and "()[]{}" are valid, while "(]" and "([)]" are not.

From the shapes of valid strings such as "()", "{[()]}" and "()[]{}", this looks a lot like an infix arithmetic expression, and infix expressions can be processed with a stack. Since a left bracket must come before its matching right bracket, push the current character onto the stack when it is a left bracket; when a right bracket is encountered, pop the top of the stack and check whether the pair matches, continuing until the stack is empty.

// ValidParentheses.java
public class Solution {
    public boolean isValid(String s) {
        char[] stack = new char[s.length()];
        int index = 0;
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == '(' || s.charAt(i) == '[' || s.charAt(i) == '{') {
                stack[index++] = s.charAt(i);
            } else if (s.charAt(i) == ')') {
                if (index == 0 || stack[--index] != '(') return false;
            } else if (s.charAt(i) == ']') {
                if (index == 0 || stack[--index] != '[') return false;
            } else {
                if (index == 0 || stack[--index] != '{') return false;
            }
        }
        return index == 0;
    }
}

Status: Accepted · Tests: 65 / 65 · Run Time: 0 ms · Language: Java

Next: LeetCode Expedition, Part 5
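The same stack-based check is compact in Python; here is a sketch (not part of the original post):

class Solution:
    def isValid(self, s: str) -> bool:
        pairs = {')': '(', ']': '[', '}': '{'}
        stack = []
        for ch in s:
            if ch in "([{":
                stack.append(ch)           # left bracket: push
            elif not stack or stack.pop() != pairs[ch]:
                return False               # right bracket: must match the top
        return not stack                   # everything must be closed

# Usage: Solution().isValid("{[()]}") returns True; Solution().isValid("([)]") returns False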
A Python Tutorial, the Basics 🐍

A very easy Python tutorial! 🐍 #Tutorial Jam @elipie's jam

Here is a basic tutorial for Python, for beginners!

Table of Contents:
1. The developer of Python
2. Comments/Hashtags
3. Print and input statements, f-strings
4. If, elif, else statements
5. Common modules

1. Developer of Python

Python was created in the late 1980s by Guido van Rossum in the Netherlands. It was made as a successor to the ABC language, capable of interfacing with the Amoeba operating system. Its name is Python because, while thinking about the language, Guido was also reading 'Monty Python's Flying Circus'. He thought the language would need a short, unique name, so he chose Python. For more about Guido van Rossum, click here.

2. Comments/Hashtags

Comments are side notes you can write in Python. They can be used, as I said before, for: sidenotes, instructions or steps, etc. How to write comments:

#This is a comment

The output is nothing because it is a comment, and comments are invisible to the computer; comments are not printed in Python. So just to make sure: hashtags are used to make comments, and comments are ignored by the computer.

3. Print and Input Statements

1. Print statements

Print statements, written as print, are statements used to print sentences or words. So for example:

print("Hello World!")

The output would be:

Hello World!

So you can see that the print statement is used to print words or sentences.

2. Input statements

Input statements, written as input, are statements used to 'ask'. For example:

input("What is your name?")

The output would be:

What is your name?

However, with inputs, you can write in them. You can also 'name' the input, like this:

name = input("What is your name?")

You could respond by doing this:

What is your name? JBYT27

So pretty much, inputs are used to make a value that you can use later. Then you could add an if statement, but let's discuss that later.

3. f-strings

f-strings, written as an f before a quotation mark, are used to print or input a value that is already defined. What I mean is, say I put an f-string on a print statement, like this:

print(f"")

The output right now is nothing; you didn't print anything. But say you add this:

print(f"Hello {name}!")

It would work, but only if name was defined. In other words, say you had an input before it:

name = input()

Then the f-string would work. Say you put in your name at the input; then the print statement would print:

Hello (whatever your name was)!

Another way you could do this is with commas. This won't use an f-string either. You would print it like this:

name = input()
...
print("Hello ", name, "!")

The output would be the same as well! The commas separate the two strings and add the name in between. But JBYT27, why not a plus sign? Well, that question you would have to ask Guido van Rossum, but I guess I can answer it a bit. It's really the Python syntax: it was designed so that a comma joins values of different types, where a plus sign would give you an error. Really, the only time you would use this is to echo back your name, or to check whether one value equals another, which we'll learn in a sec.

4. If, Elif, Else Statements

1. If statements

If statements, written as if, are literally what they are called: "if" sentences. If a condition matches something, an effect happens. You can think of an if statement as a cause and effect.
An example of an if statement is:

name = input("What is your name?") #asking for name
if name == "JBYT27":
    print("Hello Administrator!")

The output could be:

What is your name? JBYT27
Hello Administrator!

However, say it isn't JBYT27. This is where the else, elif, try, and except statements come in!

2. Elif statements

Elif statements, written as elif, are pretty much if statements; it's just the words else and if combined. Say you wanted to add more if statements. Then you would do this:

if name == "JBYT27":
    print("Hello Administrator!")
elif name == "Code":
    print("Hello Code!")

It's just adding more if statements, with an else attached!

3. Else statements

Else statements, written as else, are like if and elif statements. They tell the computer that if something is not this and not that, go to this other result. You can use it like this (following up from the code above):

if name == "JBYT27":
    print("Hello admin!")
elif name == "Squid":
    print("Hello Lord Squod!")
else:
    print(f"Hello {name}!")

5. Common Modules

Common modules include: os, time, math, sys, replit, turtle, tkinter, random, etc. I'll show you how to use all these modules I listed, step by step! ;)

But wait, what are modules? Modules are like packages that come pre-installed with Python; you just have to import them (please correct me if I'm wrong). So take this code:

import os
...

When you do this, you successfully import the os module! But wait, what can you do with it? The most common way people use the os module is to clear the page. That is, it clears the console (the black part), making your screen clear-er. But since there are many, many, many modules, you can also clear the screen using the replit module. The code is like this:

import replit
...
replit.clear()

But one amazing thing about importing is that you can be specific. Say you only want to import pi and sqrt from the math package. This is the code:

from math import pi, sqrt

Let me mention that when you do this, never, ever add an "and", like from ... import ... and .... That is just horrible and... just don't do it :)

Next is the time module. You can use the time module for: time delays, scrolling text. And yeah, that's pretty much it (I think); a small sketch of the scrolling-text trick follows at the end of this tutorial. Note: the import syntax is the same for all modules except for the names.

Next are tkinter and turtle. You can use the tkinter module for GUIs (screen playing); you can import it in a normal Python file, or do it in a new repl. You can use turtle for drawing; it isn't used much for web development though.

The math and sys modules: math is used for math calculations, and sys is used for accessing system variables. I don't really know how I could explain sys to you, but for more, click here.

Random: the random module is used for randomizing variables and strings. Say you wanted to pick a random item from a list. Here would be the code:

import random
...
a_list = ["JBYT27","pie","cat","dog"]
...
random.choice(a_list)

The output would be a random choice from the variable/list. So it could be pie, JBYT27, cat, or dog. From the random module, there are many things you can import, but the most common are: choice, randrange, etc.

And that's all for modules. If you want links, click below.

Links for modules:

And that's it! Hooray! We made it through without sleeping!

Credits to: many coders for tutorials, books and websites, replit, etc.
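Here is the promised sketch of the time module's delay and scrolling-text uses (the scroll_text function name is my own, just for illustration):

import sys
import time

def scroll_text(text, delay=0.05):
    """Print text one character at a time, like a typewriter."""
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()     # show the character immediately
        time.sleep(delay)      # the time delay between characters
    print()

scroll_text("Hello from the time module!")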
Links:
Web links: ranging from a few days or hours, if you like reading
Video links: ranging from 1-12 hours, if you don't like reading
Otherwise: ranging from 5 hours to a few days, replit tutorial links

I hope you enjoyed this tutorial! I'll cya in the next post! Stay safe!
A bid expresses how much you value your ad reaching a target audience and delivering results on your optimization_goal. bid_amount is the amount you want to spend to reach a given optimization_goal. At Facebook's ad auction, Facebook evaluates your bid_amount and the probability of reaching your optimization_goal. We apply an effective bid so you only win auctions and have your ads delivered when you are likely to reach your optimization_goal.

Bidding and Optimization core concepts include:

When you choose your bid: you can also set objective and billing_event, but neither directly impacts bid_amount or your effective bid. Your actual cost is usually equal to or less than bid_amount; occasionally it may be a little higher, see Billing Event.

For example, use these settings to spend about $10.00 for 1,000 daily unique views:

objective: APP_INSTALLS
optimization_goal: REACH
billing_event: IMPRESSIONS

However, to spend $10.00 for each app install, use these settings:

objective: APP_INSTALLS
optimization_goal: APP_INSTALLS
billing_event: any valid option

billing_event does not affect auction prices directly. It indirectly affects your actual spending due to Ads Delivery Pacing.

Define the advertising goals you want to achieve when Facebook delivers your ads. We use your ad set's optimization_goal to decide which people get your ad. For example, with APP_INSTALLS, Facebook delivers your ad to people who are more likely to install your app. optimization_goal defaults to a goal associated with your objective. For example, if objective is APP_INSTALLS, optimization_goal defaults to APP_INSTALLS. If you specify another optimization_goal, Facebook delivers your ad to as many daily unique people as possible, regardless of the probability that anyone takes action towards your objective. See Validation Best Practices.

In Marketing API v2.4, we replaced forms of bidding such as bid_type, bid_info, and conversion_specs with new fields: optimization_goal, bid_amount, and billing_event. This decoupled campaign optimization, such as APP_INSTALLS or LINK_CLICKS, from billing, such as charges per IMPRESSIONS, APP_INSTALLS, or LINK_CLICKS. See Optimization Simplification for more information.

Certain campaign objectives support only certain ad set optimization_goals; see the reference table of Campaign Objective, default optimization_goal, and other valid optimization_goals.

Tracking specs help you track actions taken by users interacting with your ad. Tracking specs only track; they do not optimize ads delivery for an action or charge you based on that action. You can use tracking specs with any bid type and creative. To specify tracking specs, use the ad field tracking_specs. We automatically select a default tracking spec set based on your objective, but you can track additional actions. For example, a link page post ad with the POST_ENGAGEMENT objective defaults to post_engagement tracking, but you can set up a Facebook pixel on the offsite page and track other actions.

The default tracking spec for the page post engagement objective is action.type = post_engagement with IDs for the post and page:

curl -X POST \
  -F 'name="My First Ad"' \
  -F 'adset_id="<AD_SET_ID>"' \
  -F 'creative={ "creative_id": "<CREATIVE_ID>" }' \
  -F 'tracking_specs={ "action.type": "post_engagement", "post": "<POST_ID>", "page": "<PAGE_ID>" }' \
  -F 'status="PAUSED"' \
  -F 'access_token=<ACCESS_TOKEN>' \
  https://graph.facebook.com/v9.0/act_<AD_ACCOUNT_ID>/ads

For custom tracking specs, see Tracking Specs, Custom.
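To make the relationship between the three fields concrete, here is a hedged sketch of creating an ad set over plain HTTP with the bidding fields discussed above. The /adsets endpoint and Graph API version mirror the curl example's conventions, but this request is my own illustration, not taken from the docs; all angle-bracket placeholders must be filled in:

import requests

ACCESS_TOKEN = "<ACCESS_TOKEN>"
AD_ACCOUNT_ID = "<AD_ACCOUNT_ID>"

resp = requests.post(
    f"https://graph.facebook.com/v9.0/act_{AD_ACCOUNT_ID}/adsets",
    data={
        "name": "App installs, optimized for installs",
        "campaign_id": "<CAMPAIGN_ID>",
        "optimization_goal": "APP_INSTALLS",  # what delivery optimizes for
        "billing_event": "IMPRESSIONS",       # what you are charged for
        "bid_amount": 1000,                   # assumed to be in the account's minor currency unit
        "daily_budget": 1000,
        "status": "PAUSED",
        "access_token": ACCESS_TOKEN,
    },
)
print(resp.json())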
# -*- coding: utf-8 -*-

from __future__ import unicode_literals, absolute_import

"""Endpoints for user login and out."""

from flask import Blueprint, request, redirect, url_for, session, flash
import re

import rophako.model.user as User
from rophako.utils import template

mod = Blueprint("account", __name__, url_prefix="/account")


@mod.route("/")
def index():
    return redirect(url_for(".login"))


@mod.route("/login", methods=["GET", "POST"])
def login():
    """Log into an account."""
    if request.method == "POST":
        username = request.form.get("username", "")
        password = request.form.get("password", "")

        # Lowercase the username.
        username = username.lower()

        if User.check_auth(username, password):
            # OK!
            db = User.get_user(username=username)
            session["login"] = True
            session["username"] = username
            session["uid"] = db["uid"]
            session["name"] = db["name"]
            session["role"] = db["role"]

            # Redirect them to a local page?
            url = request.form.get("url", "")
            if url.startswith("/"):
                return redirect(url)

            return redirect(url_for("index"))
        else:
            flash("Authentication failed.")
            return redirect(url_for(".login"))

    return template("account/login.html")


@mod.route("/logout")
def logout():
    """Log out the user."""
    session["login"] = False
    session["username"] = "guest"
    session["uid"] = 0
    session["name"] = "Guest"
    session["role"] = "user"

    flash("You have been signed out.")
    return redirect(url_for(".login"))


@mod.route("/setup", methods=["GET", "POST"])
def setup():
    """Initial setup to create the Admin user account."""

    # This can't be done if users already exist on the CMS!
    if User.exists(uid=1):
        flash("This website has already been configured (users already created).")
        return redirect(url_for("index"))

    if request.method == "POST":
        # Submitting the form.
        username = request.form.get("username", "")
        name = request.form.get("name", "")
        pw1 = request.form.get("password1", "")
        pw2 = request.form.get("password2", "")

        # Default name = username.
        if name == "":
            name = username

        # Lowercase the user.
        username = username.lower()
        if User.exists(username=username):
            flash("That username already exists.")
            return redirect(url_for(".setup"))

        # Validate the form.
        errors = validate_create_form(username, pw1, pw2)
        if errors:
            for error in errors:
                flash(error)
            return redirect(url_for(".setup"))

        # Create the account.
        uid = User.create(
            username=username,
            password=pw1,
            name=name,
            role="admin",
        )
        flash("Admin user created! Please log in now.".format(uid))
        return redirect(url_for(".login"))

    return template("account/setup.html")


def validate_create_form(username, pw1=None, pw2=None, skip_passwd=False):
    """Validate the submission of a create-user form.

    Returns a list of error messages if there were errors, otherwise
    it returns None."""
    errors = list()

    if len(username) == 0:
        errors.append("You must provide a username.")
    if re.search(r'[^A-Za-z0-9-_]', username):
        errors.append("Usernames can only contain letters, numbers, dashes or underscores.")

    if not skip_passwd:
        if len(pw1) < 3:
            errors.append("You should use at least 3 characters in your password.")
        if pw1 != pw2:
            errors.append("Your passwords don't match.")

    if len(errors):
        return errors
    else:
        return None
1. Install pyecharts

pip install pyecharts

2. Bar chart (personal notes)

Finished result: (screenshot of the rendered bar chart)

Code:

# Bar chart
from pyecharts.charts import Bar
from pyecharts import options as opt
from pyecharts.globals import ThemeType
from example.commons import Faker as fa
import random

# Generate random data
attr = fa.days_attrs
v1 = [random.randrange(10, 150) for _ in range(31)]
v2 = [random.randrange(10, 150) for _ in range(31)]

# Initialize a Bar object with some initial settings
bar = Bar(init_opts=opt.InitOpts(theme=ThemeType.WHITE))

# Add data
bar.add_xaxis(attr)
bar.add_yaxis("test1", v1, gap="0", category_gap="20%", color=fa.rand_color())
bar.add_yaxis("test2", v2, is_selected=False, gap="0%", category_gap="20%", color=fa.rand_color())

# Global options
bar.set_global_opts(title_opts=opt.TitleOpts(title="主标题", subtitle="副标题"),
                    toolbox_opts=opt.ToolboxOpts(),
                    yaxis_opts=opt.AxisOpts(axislabel_opts=opt.LabelOpts(formatter="{value}/月"), name="这是y轴"),
                    xaxis_opts=opt.AxisOpts(
                        axisline_opts=opt.AxisLineOpts(linestyle_opts=opt.LineStyleOpts(color='blue')),
                        name="这是x轴"),
                    datazoom_opts=opt.DataZoomOpts()
                    )
bar.set_series_opts(markpoint_opts=opt.MarkPointOpts(data=[opt.MarkPointItem(type_="max", name="最大值"),
                                                           opt.MarkPointItem(type_="min", name="最小值"),
                                                           opt.MarkPointItem(type_="average", name="平均值")]),
                    markline_opts=opt.MarkLineOpts(data=[opt.MarkLineItem(type_="min", name="最小值"),
                                                         opt.MarkLineItem(type_="max", name="最大值"),
                                                         opt.MarkLineItem(type_="average", name="平均值")]))

# Path of the generated HTML file
bar.render('chart/test.html')

Method parameters:

Bar(): initial settings for the chart; here the theme type is specified
add_xaxis: adds x-axis data
add_yaxis:
  is_selected: whether the series is shown by default when the chart opens
  gap: gap between bars of different series, as a percentage
  category_gap: gap between bars of the same series; defaults to 20% of the category gap
  color: the color of the bars (the label columns)
title_opts: title settings
toolbox_opts: toolbox settings
yaxis_opts/xaxis_opts: axis settings
axislabel_opts: axis label (text) settings
axisline_opts: axis line settings
datazoom_opts: data-zoom slider settings
markpoint_opts: mark-point settings; label_opts=opts.LabelOpts(is_show=False) controls whether label values are overlaid
markline_opts: mark-line settings

https://www.jianshu.com/p/400f7ce928eb
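For comparison, a minimal line-chart sketch with the same chained-call style (assuming the same pyecharts 1.x API used above; data values here are made up):

from pyecharts.charts import Line
from pyecharts import options as opt

x = ["Mon", "Tue", "Wed", "Thu", "Fri"]

line = Line()
line.add_xaxis(x)
line.add_yaxis("visits", [120, 200, 150, 80, 70])
line.set_global_opts(title_opts=opt.TitleOpts(title="Weekly visits"))

# Writes a standalone HTML file, same as bar.render() above
line.render("chart/line.html")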
Handwritten digit recognition

This time we continue with neural networks, following on from the previous post. The data is scikit-learn's handwritten digit images, and the flow is as follows:

- Prepare a dataset whose samples have 64 features (8x8 image data)
- Split it into training and test sets and convert them to tensors
- Train the model on the data and evaluate it
- Confirm on a graph that accuracy improves (the loss shrinks)

'''Prepare the libraries'''
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt

'''Prepare the dataset'''
from sklearn import datasets
from sklearn.model_selection import train_test_split

digits_data = datasets.load_digits()

◆ Preparing the libraries and the dataset

First we prepare the libraries and the dataset (handwritten images). Note that at this point digits_data is not a collection of image files; its contents are expressed as numbers. Each image is an 8x8-pixel, 16-level grayscale image whose values run from light (0) to dark (16).

◆ Displaying the dataset

Since the dataset behaves like a dictionary, you can check its keys with dir(), though you don't particularly need to. You can inspect the contents with print(digits_data), and check the shape of each element with shape. For example, to check the "data" key inside digits_data, run print(digits_data.data.shape).

'''Display the data'''
plt.figure(figsize=(10, 4))
for i in range(10):
    ax = plt.subplot(2, 5, i+1)
    plt.imshow(digits_data.data[i].reshape(8, 8), cmap="Greys_r")
    plt.title(i)
    ax.axis('off')
plt.show()

First we choose how many images to display, and specify the figure size with plt.figure(figsize=(width_in, height_in), dpi=resolution, facecolor=margin_color, edgecolor='k'). To view several images at once, we use plt.subplot(rows, cols, position counted from the top left). Next, plt.imshow displays an image from the numeric data. Here reshape converts the image from 64x1 to 8x8, and cmap specifies the color map. The dataset contains 1,797 low-resolution (8x8) images of the digits 0-9 like the ones above.

'''Prepare the training data'''
x_train, x_test, y_train, y_test = train_test_split(digits_data.data, digits_data.target, test_size=0.25, random_state=42)

'''Convert to tensors'''
x_train = torch.FloatTensor(x_train)
y_train = torch.LongTensor(y_train)
x_test = torch.FloatTensor(x_test)
y_test = torch.LongTensor(y_test)

'''Define the model'''
net = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 10)
)

'''Define the optimization method'''
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

◆ Preparing the training data

As last time, train_test_split splits the dataset into training and test sets. test_size sets the fraction of test data, and random_state fixes the random seed. Setting random_state means anyone who runs the code gets the same result, which is essential for reproducible research.

◆ Defining the model and the optimization method

The data is then converted to tensors and the model is defined inside nn.Sequential. Last time we subclassed nn.Module, as in the image above, and wrote the layers in the constructor (__init__) and forward. With nn.Sequential, however, the fully connected layers (nn.Linear) and activation functions can be written directly in a flow. Once the model is defined, we define the optimization method: again cross-entropy loss (nn.CrossEntropyLoss) for evaluation, with SGD as the optimizer.

'''Prepare empty arrays to record the loss'''
record_loss_train = []
record_loss_test = []

'''Training'''
for i in range(1001):
    optimizer.zero_grad()
    x_train_net = net(x_train)
    x_test_net = net(x_test)
    loss_train = criterion(x_train_net, y_train)
    loss_test = criterion(x_test_net, y_test)
    record_loss_train.append(loss_train.item())
    record_loss_test.append(loss_test.item())
    loss_train.backward()
    optimizer.step()
    if i%100 == 0:
        print("Epoch:", i, "Loss_Train:", loss_train.item(), "Loss_Test:", loss_test.item())

To record the loss history, we prepared empty arrays for the training data (train) and the test data (test). Then, as usual, training proceeds by looping over these steps:

- Clear the gradients with optimizer.zero_grad()
- Feed the feature data through the model (net) (forward pass)
- Evaluate the error between the model output and the correct labels
- Compute the gradients of each variable with .backward() (backpropagation)
- Update the parameters with optimizer.step()

We printed the loss for confirmation; you can see the error shrinking as training progresses (as the epoch count grows). As a reminder, .item() extracts just the value from a tensor.

'''Visualize the loss history (record_loss)'''
plt.plot(range(len(record_loss_train)), record_loss_train, label="Train")
plt.plot(range(len(record_loss_test)), record_loss_test, label="Test")
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show()

'''Accuracy'''
accuracy = (net(x_test).argmax(1) == y_test).sum().item() / len(y_test)
print("Accuracy:", round(accuracy*100,1), "%")

◆ Visualizing the loss history

Here we visualize the loss histories recorded in the training loop (record_loss_train,
record_loss_test). plt.plot(x, y) plots points at the given coordinates, and range(len(record_loss_train)) covers the range 0-1000; plotting the loss recorded at each of these indices produces the curves above.

◆ Accuracy

Next, the accuracy calculation (number of matching elements / total number of elements). The matches compare "net(x_test): the trained outputs" with "y_test: the correct labels"; since these are tensors, we need .sum().item() to get a plain number. net(x_test) gives, for each image, scores for it being each digit 0-9; argmax (detailed below) picks the index with the highest score, that is, "the digit the network considers most likely".

◆ argmax

argmax() returns the index of the maximum value; passing 0/1 as the argument selects the column/row direction.

'''Experiment'''
img_id = 1234
x_pred = digits_data.data[img_id]
image = x_pred.reshape(8, 8)
plt.imshow(image, cmap="Greys_r")
plt.axis('off')
plt.show()
y_pred = net(torch.FloatTensor(x_pred))
print("Label:", digits_data.target[img_id], "Prediction:", y_pred.argmax().item())

In the experiment you can change the value of img_id to classify various handwritten digits (since the accuracy is above 90%, you should get the right answer most of the time). Now that we can make predictions with PyTorch, let's look at how to raise the model's accuracy.

Improving the model's accuracy

Before tuning the model, watch out for "weight initialization". The weights are optimized every time the model is trained, so they must be reset between runs. Insert the following code before the "Training" step:

'''Initialize the weights (reset the previous training)'''
def init_weights(m):
    if type(m) == nn.Linear:
        torch.nn.init.xavier_uniform(m.weight)
        m.bias.data.fill_(0.01)

net.apply(init_weights)

With that in place, we adjust each parameter and look at the results.

◆ Learning rate (the lr in "optimizer = optim.SGD(net.parameters(), lr=0.01)")
- lr=0.001: 86.5%
- lr=0.005: 95.3%
- lr=0.01: 94.7%
- lr=0.05: 93.8%
- lr=0.1: 96.9%

◆ Test data fraction (the test_size in "x_train, x_test, y_train, y_test = train_test_split(digits_data.data, digits_data.target, test_size=0.25, random_state=0)")
- test_size=0.10: 97.2%
- test_size=0.18: 96.0%
- test_size=0.25: 94.7%
- test_size=0.32: 95.7%
- test_size=0.40: 95.4%

◆ Number of epochs (the range in "for i in range(1001)")
- epoch=301: 89.3%
- epoch=501: 93.1%
- epoch=1001: 94.7%
- epoch=1501: 96.0%
- epoch=2001: 97.1%

◆ Number of nodes in the hidden layers (the 64→32→16→10 nn.Linear sizes inside net = nn.Sequential())
- 64→16→10→10: 96.0%
- 64→16→16→10: 94.0%
- 64→32→16→10: 94.7%
- 64→64→64→10: 96.2%
- 64→128→64→10: 97.6%

◆ Number of hidden layers (same place)
- 64→16→10: 96.2%
- 64→32→16→10: 94.7%
- 64→32→16→16→10: 95.6%
- 64→32→32→16→16→10: 96.4%
- 64→64→32→32→16→16→10: 93.3%

◆ Optimizer (the SGD in "optimizer = optim.SGD(net.parameters(), lr=0.01)")
- SGD: 94.7%
- Adam: 96.9%
- Adam-AMSGrad: 96.4%
- Adagrad: 96.7%
- RMSprop: 94.4%

Note: AMSGrad improves Adam's convergence by clamping the update with the running maximum of a quantity computed from the squared gradient norms. It is used by passing "amsgrad=True" to Adam; in this example, "Adam(net.parameters(), lr=0.01, amsgrad=True)".

References
- Handwritten digit recognition with scikit-learn
- Multi-class classification of the handwritten digits dataset with machine learning
- Changing the output image size in matplotlib
- Nesting matplotlib plots with subplot
- Handwritten digit recognition with a neural network
- Implementing a CNN with PyTorch's Sequential
- PyTorch reference
- Loss functions in PyTorch for Python
- The role and types of loss functions in machine learning
- Notes on optimizers
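As a small extension (not part of the original post), per-class mistakes are often more informative than a single accuracy number. A quick confusion-matrix sketch using the net, x_test, and y_test defined above:

'''Confusion matrix (extension)'''
import torch

with torch.no_grad():
    preds = net(x_test).argmax(1)

# 10x10 matrix: rows = true digit, columns = predicted digit
conf = torch.zeros(10, 10, dtype=torch.long)
for t, p in zip(y_test, preds):
    conf[t, p] += 1

print(conf)  # off-diagonal entries show which digits get confused with which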
After window frames with insulated glazing are installed, gaps can form between them and the wall. Exterior metal window slopes (jamb trims) help close them. They are not only decorative but also reliable protection from wind and precipitation.

Purpose

Exterior metal slopes for PVC windows protect the installation foam from ultraviolet light, precipitation, and wind erosion. If the foam is left uncovered, within a few years it deteriorates: cold air starts entering the room, the walls get damp, and the risk of mold grows. Slopes should be installed on new windows right away; most often they too are PVC, less often metal. For a finished exterior, metal is the better choice: it isolates the opening from outside influences reliably and well.

Metal slopes for PVC windows must be installed immediately. Many homeowners skip this, believing the foam will cope without protection. That is mistaken. The main reasons installation is needed at once:

- Aesthetically the bare joint looks unattractive, and the building facade looks untidy.
- The installation foam degrades under sun, wind, and precipitation.
- Foam cannot serve as waterproofing; it must be protected from moisture, otherwise it soaks water up and the window freezes through in frost.

If metal slopes cannot be installed immediately, for example while the building is still under renovation, the fresh foam around the installed windows should at least be covered with polyethylene film until the slopes are fitted.

(Photo: ready-made metal slopes)

Why is metal recommended for slopes?

The installation foam can be protected with plaster mixes, plastic panels, or paint, and indoors drywall will do. Metal slopes suit the facade side of a building best: the material does not burn and withstands any weather, and with stainless steel there is no worry about corrosion either. Plaster can crack; in contact with a PVC frame condensate often forms, which harms the walls and the plaster itself and can cause mold. Drywall fears moisture and is destroyed by it, so it can only be used in dry rooms; outdoors it quickly deforms, swells, and disintegrates under precipitation. Plastic can be used and is considerably cheaper than metal, but it dislikes temperature swings and becomes more brittle; the mechanical load of wind can break such panels.

Metal slopes are usually made of galvanized steel: it does not rust and is not afraid of mechanical impact. The material is coated with a protective polymer layer, which both preserves the steel from the elements and gives the slopes a neat decorative appearance.

(Photo: slope material, galvanized steel)

Advantages

- Thanks to the properties of steel and the special coating, metal window slopes are reliably protected from rust.
- Their service life is practically unlimited, several times longer than that of the alternatives.
- Thanks to modern polymers, the slopes never need repainting.
- Installation is easy, within reach of a beginner.
- An attractive appearance makes the facade look finished from a design standpoint.
- Weather cannot harm the material.
- Metal window slopes come in various color schemes, so they can be matched to any windows: standard white or wooden frames as well as unusual designs. A custom color can be ordered for an original look.
- Although metal costs more than the alternatives, it pays for itself, since the slopes never have to be replaced over time. If the design tires you, you can always repaint the slopes in the color you want.

(Photo: measuring the opening before installing slopes)

How to install

Before mounting metal window slopes, the opening must be measured correctly. This is done at the wall surface along the outer and inner contours. The required pieces are then cut from flat sheet metal. A utility knife removes excess foam; clear away the debris, then fill the joints with silicone-based sealant to keep moisture out.

(Photo: required tools)

Work sequence

Installing metal slopes on PVC windows is quite feasible on your own. The procedure:

1. Start with the sill flashing (drip cap), setting it with a level. If rain noise bothers you, the cavity underneath can be filled with a soft material. The piece is set on installation foam, applied in a thin layer to the metal and to the surface of the opening; it is also advisable to screw the flashing to the frame with self-tapping screws. Weigh the bottom piece down with something heavy so it doesn't push the other pieces up.
2. After that, install the exterior metal slopes. How carefully you measured determines whether gaps appear at the joints between pieces. The slopes fasten easily to the window frame with self-tapping screws.

Tip: do not start mounting the metal slopes and flashing unless you are sure the layout is correct. It is best to cut the pieces first from paper rather than metal; a sheet of drawing paper may be needed. This checks the measurements and lets you adjust the pattern to size without ruining the metal sheet. If the paper version fits, lay it on the metal, trace it with a marker, and cut out the piece.

(Photo: installing metal slopes)

Installation proceeds from the bottom up, with the top piece going in last. Any gaps that form are filled with installation foam and sealant. Finishing windows with metal slopes is entirely doable yourself; the main conditions are calculating the material correctly and measuring accurately.

Many people, after replacing windows, limit themselves to finishing the slopes and the area under the sill indoors. Unfinished window structures fit poorly into an interior, while the view from outside doesn't catch the eye: from the street the loose ends are not visible, and the installation foam seems to close all the gaps reliably. That line of thought is mistaken.

Why finish the exterior slopes?

The mistake is attributing a purely decorative function to them. True, a beautifully framed window looks better. But the foam used during installation, which has heat-retaining and sound-insulating properties, is not resistant to ultraviolet radiation: under its effect the foam gradually crumbles and breaks down, losing its ability to protect the home from cold and street noise. Moisture penetrates the assembly and shortens its service life. Having replaced a window, don't delay the slopes.

Finishing materials

The cladding is done with:
- plaster;
- drywall;
- plastic;
- sandwich panels;
- thermal slopes;
- metal.
The advantage of plaster is availability. It is applied in several coats until the foam is covered; after each coat you wait for it to dry before applying the next, then the plaster is sanded and painted. The drawback is poor heat retention; even adding insulating compounds to the cement mix does not save the situation. Plaster adheres poorly to window profiles, gradually pulling away and breaking down under water that gets into the cracks.

Drywall is used for interior finishing. It can be used outside only if the window opens onto a glazed loggia, balcony, or terrace. It is unsuitable for exterior finishing because it cannot stand moisture: in contact with it, drywall deforms, delaminates, and disintegrates.

Plastic (PVC) slopes are more justified: they harmonize well with the window, are affordable, and install simply, but they do not resist mechanical impact. Plastic made with violations of the production process quickly cracks and loses color.

Sandwich panels can be called a decent choice. Three-layer constructions are used for slopes: the two outer layers are made of a dense material (PVC), while the inner layer is porous, providing high sound and heat insulation. However, they are expensive and not easy to install, and the panel deforms when heated.

Thermal slopes are among the technological novelties in cladding: foam plastic coated with marble chips and acrylic binders. Their virtues are many, but a thermal slope is very fragile; it is easy to damage during cutting or installation, so fitting should be left to professionals.

Metal products are gaining popularity.

(Photo: a steel assembly)

Advantages and disadvantages of metal slopes

They are made of cold-rolled galvanized steel covered with a polymer layer. Steel fears neither rain, nor frost, nor heat; the zinc and polymer coatings protect the product from corrosion. The standard colors are white and brown (which goes well with wood-look window profiles), though metal window trim elements can be any color. They can serve for several decades and are easy to install. Maintenance takes no special effort: dirt washes off easily with water, and household cleaning products are allowed.

The only generally acknowledged drawback is the noise during rain. This is fixable if a sound-absorbing butyl-rubber sealing tape, which is also waterproof, is fitted during installation. Tape from any manufacturer will do; the main condition is that it be suitable for exterior work.

Another drawback is price: higher than plastic, but acceptable to buyers. The money spent pays off, since a slope made of metal lasts considerably longer than a plastic one.

Where metal slopes are used

There are no restrictions on where they can be applied: city apartments, private houses, public buildings, offices. Thanks to the polymer coating and the variety of colors, they do not look like typical metal panels and combine with both plastic and wooden window frames.

Installing metal cladding

Window-installation crews take on slope finishing reluctantly, reckoning that the pay does not cover the time spent. You can hire independent specialists, or, if you have construction skills, do the work yourself; the main thing is not to violate the installation rules.

Preparing for installation

The work begins with preparation.
You need to stock up on material and tools, measure the window, and prepare a workplace for the installation.

Tools and materials

For installation you will need: a tape measure; a square; a spirit level; a power screwdriver; a hammer drill; metal snips; a utility knife; a hammer; a chisel; the slope kit; sealant; a caulking gun; self-tapping screws; a spatula; filler.

Before starting work, unpack the kit to make sure all the metal profiles are present and free of defects. A standard kit includes: the sill flashing, the top slope, two side slopes, starter profiles, and fasteners.

(Photo: correct measurement is essential)

Measurements

Measure the height and width of the window opening with a tape measure. Treat this stage of the installation attentively; to be safe, it is better to take the measurements twice.

Cleaning and repairing the site

First, cut off the excess foam with the knife; it should be flush with the opening. Then inspect the surfaces of the opening. Defects (bumps, cracks, holes) must be removed: bumps with the chisel and hammer, cracks filled with filler, and the joints at the seams treated with sealant. After this cosmetic repair of the opening, wait until the materials (sealant, filler) dry.

(Photo: flashings in various colors)

Installing the flashing

The flashing is deliberately longer than the opening is wide. Transfer the width of the window opening to the metal flashing panel with an allowance on each side. Cut it, trimming the allowances so that trapezoidal "ears" form at the edges, plus a triangle to cover the corner of the opening; bend the trapezoids upward. Once the flashing is ready, offer it up to the bottom of the opening.

Before final fixing, lay the sound-absorbing tape. Fitting it is simple: cut a piece of the required length from the roll, peel the backing paper off the adhesive side, and stick the tape down, running the ends up onto the side slopes. The tape adheres strongly to various surfaces (concrete, metal, etc.); the main thing is that they be free of dust and debris. Do not paint or plaster over the tape, or it will lose its working properties.

Check the position against the surface with the level. If the panel sits askew, correct the tilt with materials at hand; any metal or wooden spacers will do for adjusting the height. Drill holes in the flashing and the base, and fasten with self-tapping screws and dowels.

Installing the slopes

The metal parts in a factory kit are covered with protective film so they are not damaged during transport, loading, and storage. Remove the film before installation; peeling it off after the piece is fastened is harder.

Before placing and fixing the slopes, install the starter profiles. Offer the side profiles up to the window height and mark with a pencil or marker where they reach; then offer the top one up across the width, and if everything fits, fasten with screws.

Installing the top slope: the top panel is trimmed the same way as the flashing, inserted into the fixed top profile, and screwed on. The side profiles are screwed on, and the right and left slopes mounted. The joints are sealed with sealant to completely rule out water getting under the cladding.

Correctly installed trim reliably protects the windows from the weather. If you are not confident in your abilities, better not to risk it: you can lose time and damage the panels. To avoid consequences, it is better to turn to a professional. You can learn more about taking measurements from the video.
Additional information on how the slopes are installed is given in the video.

Functions and purpose of metal slopes

(Photo: a metal window slope)

Exterior metal window slopes perform several basic functions:
- decorative: the window acquires a finished, neat look;
- protective: they shield the joints from the negative effects of the environment;
- sealing: they allow additional thermal insulation to be fitted and reliably protect the house from cold air getting in.

Advantages over other materials

Various materials can be used in the final finishing of a window opening:
- plaster, which cracks over time; under the influence of moisture, mold can appear in the cracks;
- drywall, which swells and breaks down rather quickly and can be used only for interior work;
- plastic panels, good for interior finishing but completely unsuitable for exterior work: plastic fades in the sun and becomes brittle in frost.

Metal profile is free of these shortcomings, as it is resistant to temperature swings and precipitation.

Pros and cons of metal slopes

The pros of this material:
- sealing: protection of the joints from moisture and mold;
- service over many decades;
- increased strength and resistance to weather;
- simple care: wiping the slopes with a damp cloth is enough;
- a stylish appearance.

Cons:
- unsuitable for arched or complex openings;
- high price;
- they require additional sound insulation.

Preparing for installation

The preparatory work includes the following steps:
1. Removing excess installation foam.
2. Filling the walls if there are cracks.
3. Treating the joints and surface with sealant, and the walls with an antiseptic.

Installing the slopes

(Photo: exterior metal slopes)

How are exterior metal window slopes made? First, transfer the dimensions to the metal siding to fabricate the needed pieces. A metal window slope is mounted step by step:

- The pieces are installed sloping away from the window. The size of a piece must match the size of the frame; it may be reduced by at most 1 cm.
- When cutting the outer section and the side pieces, leave allowances for the wall, which are then bent over.
- The drip flashing is set into its seat and screwed to the frame with self-tapping screws.
- The side pieces are also fastened with screws so as to cover the cut ends of the flashing.
- The top piece is aligned with the outer corners and the side parts.

All pieces are fixed with sealant, and sealing tape is used during installation for better thermal insulation. You can buy inexpensive exterior metal window slopes in Moscow from us by leaving a request on our website.

One of the most durable, reliable, and easy-to-install facade materials is galvanized profiled sheeting (profnastil), increasingly preferred in the construction of prefabricated buildings. The final stage of creating a profiled-sheet facade is the installation of a wide range of trim elements used to finish joints, building corners, reliefs, cornices, and door and window openings. Of all the uses of trim elements, framing window openings is the most demanding, since here you must not only finish the joints neatly but also protect them from moisture and dirt.

How do you lay profiled sheeting around windows? You will need to know:
- the distance from the corner to the window;
- the distances between windows (if there are several);
- the widths and heights of the windows;
- the heights at which the windows sit;
- the distance between the top of the window and the roof.

Horizontal crosspieces are used to frame the window: side shelves cut from profiled sheet with metal snips. One side fastens to the house, while the opposite surface is used to fasten the metal profile.

Fitting the metal profile around the windows is a task of its own. It starts with offering up a sheet of profiled sheeting, positioned so that its edges protrude on both sides of the window opening. Then the width of the window is marked on the sheet with 6-10 cm added; the resulting marks show where the vertical cuts go. To make a template for the horizontal cut, fix a small piece of metal profile near the window and mark it 6 cm below the windowsill on both sides of the opening, in case the window's horizontal level is not ideal. Transferring these marks from the template to the sheet of metal profile lets you make the required cut.

Attention! Cutting with an abrasive disc (an angle grinder) damages the zinc and polymer coatings of the sheeting, which invites accelerated corrosion. Use only metal snips for cutting the sheets.

Usually the windows, except for the exterior slopes, are installed on the facade before the sheeting goes up. The gap between the wall and the window, also called the installation joint, is filled with polyurethane installation foam after the window is fixed. The foam itself is protected from external influences with tapes: for exterior protection, a waterproofing diffusion tape and vapor-permeable PSUL tape; for interior protection, laminated and metallized vapor-barrier tapes. Outside, the installation foam absolutely must be protected from the sun, since it is destroyed by UV rays, and from moisture, which degrades its durability and thermal insulation. At the same time, the water condensate that forms in the foam's pores must be able to escape outward, meaning the tape must be vapor-permeable.

How do you install the exterior slopes and drip on windows?

Finishing the exterior window slopes is not just a designer's whim; above all it is a serious contribution to insulating the walls and windows. Once the window is installed, fitting the exterior slopes absolutely must not be put off for long, since without proper protection of the installation joints from moisture the thermal insulation of the window opening deteriorates substantially. Of the many materials used for exterior slopes, plastic and metal are the most popular.

Plastic slopes ideally complement metal-plastic windows, with which they look like a single whole. The main advantage of plastic slopes is ease of installation; the main drawback is the high cost. Other virtues of exterior plastic slopes are indifference to changing weather and humidity and no deformation in frost. The plastic used for slopes has an even structure, so the surface does not need leveling before finishing. In addition, plastic window slopes offer high heat- and sound-insulating qualities and combine perfectly with any type of insulation.
Metal with a polymer coating, used for exterior window slopes, has the best thermal insulating properties; in addition, it does not corrode, has a long service life, and gives the walls excellent protection against drafts and freezing. Finishing exterior slopes with metal is perhaps the most popular option, so let us cover it in more detail. The first step in installing a slope is measuring the window opening; based on the measurements, a Z-shaped slope is made from a sheet of galvanized, polymer-coated metal (or a ready-made one is bought). The metal slope is fastened to the window frame with self-tapping screws, and all the joints are then treated with silicone sealant. The shade of polymer-coated slopes is usually chosen to match the windows, the roof or the facade. The slope is complemented by a window drip sill, which finishes the lower part of the exterior slope and protects it from rain and snow. For more detail on finishing windows with exterior metal slopes, see this video:

All the trim elements needed to frame the windows on all four sides are often sold as a set and form a so-called box, which is fastened to the window unit with self-tapping screws or rivets every 300–500 mm. Besides the obligatory exterior slope and drip sill, trim pieces such as exterior and interior corners, aquilon strips and casings are used to frame windows. With simple hand bending tools, most trim pieces can be made yourself. For example, this short tutorial explains how to make a window drip sill with your own hands:

Window trim elements serve to decorate and to improve performance; they reliably protect the opening and the rooms from heat loss, drafts and moisture. Window slopes can be made of different materials. Types: Plaster. An economical kind of window slope, used less and less for finishing because it is short-lived. Plastic. The best choice for PVC windows; comparatively expensive. A durable, easy-to-install option resistant to many influences. Sandwich panels. Good looks, protection from many external influences, long service life; poor color fastness under UV light (in budget materials). Metal. High resistance to the weather, a presentable appearance thanks to the decorative protective layer, easy to use and install; the high price is the main drawback. Foam plastic. Good sound and heat insulation, protection against fungus and mold, original color options, easy installation. Drywall. Economical, easy-to-install slopes; poor moisture resistance, indoor use only. Today, consumer demand is especially high for plast…

Slope (m) of a line (Coordinate Geometry)

Definition: the slope of a line is a number that measures its "steepness", usually denoted by the letter m. It is the change in y for a unit change in x along the line.

Try this: Adjust the line below by dragging an orange point at A or B. The slope of the line is continuously recalculated. You can also drag the origin point to (0,0).

The slope of a line (also called the gradient of a line) is a number that describes how "steep" it is. In the figure above, click "reset". Notice that for every increase of one unit to the right along the horizontal x-axis, the line moves down half a unit. It therefore has a slope of -0.5.
To go from point A to B along the line, we have to move 30 units to the right and 15 down. Again, that is half a unit down for every unit across. Since the line slopes downward to the right, it has a negative slope: as x increases, y decreases. If the line sloped upward to the right, the slope would be a positive number. Adjust the points above to create a positive slope.

Formula for the slope

Given any two points on the line, its slope is given by the formula m = (By − Ay) / (Bx − Ax), where: Ax is the x-coordinate of point A; Ay is the y-coordinate of point A; Bx is the x-coordinate of point B; By is the y-coordinate of point B.

Example: In the diagram at the top of the page, click "reset". Substituting the coordinates of A and B into the formula gives the slope.

Finding the slope of a line by inspection

Instead of just plugging numbers into the formula above, we can find the slope by understanding the concept and reasoning it out. Refer to the line below, defined by the two points A, B. We can see that the line slopes up and to the right, so the slope will be positive. Calculate dx, the horizontal distance from the left point to the right point: since B is at (15,5), its x-coordinate is the first number, 15; the x-coordinate of A is 30, so the difference (dx) is 15. Calculate dy, the amount the line rises or falls as you go to the right: since B is at (15,5), its y-coordinate is the second number, 5; the y-coordinate of A is 25, so the difference (dy) is +20. It is positive because the line goes up as you move to the right; otherwise it would be negative. Dividing the rise (dy) by the run (dx) gives the slope: 20/15, or about +1.33.

A way to remember this method is "rise over run": the "rise" is the vertical difference between the points, over the "run", the horizontal distance between them. Just remember that a rise going downward is negative.

Direction of slope

The slope of a line can be positive, negative, zero or undefined.

Positive slope: here y increases as x increases, so the line slopes up to the right. The slope will be a positive number. The line on the right has a slope of about +0.3: it goes up about 0.3 for every step of 1 along the x-axis.

Negative slope: here y decreases as x increases, so the line slopes down to the right. The slope will be a negative number. The line on the right has a slope of about -0.3: it goes down about 0.3 for every step of 1 along the x-axis.

Zero slope: here y does not change as x increases, so the line is exactly horizontal. The slope of any horizontal line is always zero. The line on the right goes neither up nor down as x increases, so its slope is zero.

Undefined slope: when the line is exactly vertical, it has no defined slope. The two x-coordinates are the same, so their difference is zero, and the slope calculation would require dividing by zero; when you divide anything by zero the result has no meaning. The line above is exactly vertical, so it has no defined slope. We say "the slope of the line AB is undefined." A vertical line has an equation of the form x = a, where a is the x-intercept. For more on this, see Slope of a vertical line.

Equation of a line

The slope m of a line is one of the elements in the equation of a line when it is written in "slope and intercept" form: y = mx + b. The m in the equation is the slope of the line described here.

Slope as an angle

The slope of a line can also be expressed as an angle, usually in degrees or radians. In the figure above, click "show angle". By convention, the angle is measured from any horizontal line (parallel to the x-axis).
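As a quick cross-check of the rise-over-run reasoning above, here is a small sketch (the helper name slope is mine, not from the page) that computes the slope and the corresponding angle for the worked example points A(30, 25) and B(15, 5):

import math

def slope(ax, ay, bx, by):
    """Slope of the line through A(ax, ay) and B(bx, by)."""
    if ax == bx:
        raise ValueError("vertical line: slope is undefined")
    return (by - ay) / float(bx - ax)

m = slope(30, 25, 15, 5)            # (5 - 25) / (15 - 30) = -20 / -15 = 20/15
angle = math.degrees(math.atan(m))  # angle = arctan(m)
print(m, angle)                     # ~1.333..., ~53.1 degrees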
Lines with a positive slope (up and to the right) have a positive angle, and those with a negative slope a negative angle. Change the slope by dragging A or B and see for yourself. To convert between the slope m and the slope angle: angle = arctan(m), and m = tan(angle). Tan and its inverse arctan are described in the Trigonometry Overview.

Things to try: On the diagram above, drag the points A and B around and note how the calculated slope changes. Try to get a positive, negative, zero and undefined slope. Click "hide details". Drag A and B to new locations and calculate the slope of the line yourself. Then click "show details" and see how close you got. For a bonus, estimate the slope from two points of your own choosing on the line, rather than from A and B. Adjust points A and B to get slopes of +1 and -1. What do you notice about the slope? (Answer: a 45° slope – a line halfway between vertical and horizontal.) Click "show angle" to confirm.

Limitations: For clarity, in the applet above the coordinates are rounded to integers and lengths are rounded to one decimal place. This can cause calculations to be slightly off. For more, see the teaching notes. Other coordinate geometry topics. (C) 2011 Copyright Math Open Reference. All rights reserved.

Install: pip3 install pywin32 pyinstaller, then pip3 install --upgrade setuptools. Build: pyinstaller -F demo.py.

pyinstaller error: AttributeError: 'str' object has no attribute 'items'. Solution: pip3 install --upgrade setuptools.

Usage: pyinstaller -h lists the options.

General options: -y, --noconfirm Replace the output directory (default: SPECPATH/dist/SPECNAME) without asking for confirmation. --upx-dir UPX_DIR Path to the UPX utility (default: search the execution path). --clean Clean the PyInstaller cache and remove temporary files before building. --log-level LEVEL Amount of detail in console messages during the build. LEVEL can be one of TRACE, DEBUG, INFO, WARN, ERROR, CRITICAL (default: INFO).

What to generate: -D, --onedir Create a one-folder bundle containing an executable (default). -F, --onefile Create a one-file bundled executable.

What to bundle, where to search: --add-data This option can be used multiple times. --add-binary This option can be used multiple times. -p DIR, --paths DIR A path to search for imports (like using PYTHONPATH). Multiple paths are allowed, separated by ':', or this option can be used multiple times. --hidden-import MODULENAME, --hiddenimport MODULENAME Name an import not visible in the code of the script(s). This option can be used multiple times.
Windows and Mac OS X specific options: -c, --console, --nowindowed Open a console window for standard i/o (default). -w, --windowed, --noconsole Windows and Mac OS X: do not provide a console window for standard i/o.

Example: using a pyd. Path-related source code:

    sys.path.append('./sdk/superdog/')
    import superdog

pyinstaller command:

    pyinstaller -y -D --paths="sdk/superdog" demo.py

This creates the build and dist folders, as well as demo.spec.

Output:

    78 INFO: Extending PYTHONPATH with paths ['E:\\git\\python\\helloworld', 'E:\\git\\python\\helloworld\\sdk\\superdog', 'E:\\git\\python\\helloworld']

demo.spec:

    block_cipher = None

    a = Analysis(['demo.py'],
                 pathex=['sdk/superdog', 'E:\\git\\python\\helloworld'],
                 binaries=[],
                 datas=[],
                 hiddenimports=[],
                 hookspath=[],
                 runtime_hooks=[],
                 excludes=[],
                 win_no_prefer_redirects=False,
                 win_private_assemblies=False,
                 cipher=block_cipher,
                 noarchive=False)
    pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
    exe = EXE(pyz,
              a.scripts,
              [],
              exclude_binaries=True,
              name='demo',
              debug=False,
              bootloader_ignore_signals=False,
              strip=False,
              upx=True,
              console=True)
    coll = COLLECT(exe,
                   a.binaries,
                   a.zipfiles,
                   a.datas,
                   strip=False,
                   upx=True,
                   name='demo')

Inspecting build/demo/xref-demo.html shows that superdog was detected.

Tips: if we use pyinstaller -y -D demo.py without --paths="sdk/superdog", the package will be missing and running the executable will fail.

Run the executable: cd dist/demo, ./demo.exe. All the related libraries were copied into the dist/demo/ folder, e.g. cublas64_80.dll, curand64_80.dll, cudart64_80.dll, cudnn64_6.dll.

Using ctypes DLLs with relative paths, source code:

    sys.path.append('./sdk/superdog/')
    sys.path.append('./sdk/detect/')
    import superdog
    import detect

CDLL relat…
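The note above cuts off mid-sentence, but it concerns loading a DLL via ctypes using a path relative to the script. A minimal sketch of that pattern, assuming a hypothetical detect.dll inside the sdk/detect folder (sys._MEIPASS is the temporary folder a PyInstaller one-file bundle unpacks into):

    import os
    import sys
    from ctypes import CDLL

    # Resolve paths relative to the script, or to the PyInstaller bundle dir
    # when frozen (one-file bundles expose it as sys._MEIPASS).
    base = getattr(sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__)))
    dll_path = os.path.join(base, 'sdk', 'detect', 'detect.dll')  # hypothetical DLL name

    detect = CDLL(dll_path)  # exported functions are then available as detect.<name>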
Traceback (most recent call last):
  File "/usr/bin/openlp", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3250, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3234, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3263, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 583, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 900, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 786, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'OpenLP==2.5.dev2899' distribution was not found and is required by the application

I'm happy to see the first public step to V3.0. Thank you!
blavaan is a free, open source R package for Bayesian latent variable analysis. It relies on JAGS and Stan to estimate models via MCMC. The blavaan functions and syntax are similar to lavaan. For example, consider the Political Democracy example from Bollen (1989): library(blavaan) model <- ' # latent variable definitions ind60 =~ x1 + x2 + x3 dem60 =~ y1 + y2 + y3 + y4 dem65 =~ y5 + y6 + y7 + y8 # regressions dem60 ~ ind60 dem65 ~ ind60 + dem60 # residual covariances y1 ~~ y5 y2 ~~ y4 + y6 y3 ~~ y7 y4 ~~ y8 y6 ~~ y8 ' fit <- bsem(model, data = PoliticalDemocracy) summary(fit) The development version of blavaan (containing updates not yet on CRAN) can be installed via either of the following commands. Compilation is required in both cases; this may be a problem for users who currently rely on a binary version of blavaan from CRAN. # from github: remotes::install_github("ecmerkle/blavaan", INSTALL_opts = "--no-multiarch") # from website: install.packages("blavaan", repos = "http://faculty.missouri.edu/~merklee", type = "source") For further information, see: Merkle, E. C., & Rosseel, Y. (2018). blavaan: Bayesian structural equation models via parameter expansion. Journal of Statistical Software, 85(4), 1–30.
When writing multithreaded programs, we often run into two kinds of variables.

One kind is the global variable, shared by multiple threads. To keep it from being modified chaotically, we already mentioned earlier that it needs a lock.

The other kind is the local variable. It is used by a single thread only, and the threads do not affect one another.

For example, in the program below, the count variable defined inside the task() function is a local variable. Even though we create two threads, their count increments do not interfere with each other, because count is defined inside task.

    import threading

    def task():
        count = 0
        for i in range(1000):
            count += 1
        print count

    if __name__ == '__main__':
        t1 = threading.Thread(target=task)
        t1.start()
        t2 = threading.Thread(target=task)
        t2.start()

So, is this approach already perfect? Not quite.

The example above is a very simple one, but once we face more complex business logic – several local variables, functions calling through multiple levels, and so on – defining local variables this way becomes verbose and awkward.

By multi-level function calls we mean, for example: we define methodA(), whose body calls methodB(), whose body in turn calls methodC()... If we call methodA() in some thread and use a variable attr, we then have to pass attr down, level by level, to every subsequent function.

Isn't there a way to define a variable once in a thread, so that every function in that thread can use it? That would be truly clean and clear.

Python gives us exactly that: ThreadLocal.

Using ThreadLocal takes only three steps: Create a threading.local object. Bind attributes to that object inside the thread; all bound attributes are isolated per thread. Access them inside the thread.

The code below demonstrates this:

    # coding=utf-8
    import threading

    local = threading.local()  # create one global threading.local object

    def task():
        local.count = 0  # initialize a thread-local variable; threads do not affect each other
        for i in range(1000):
            count_plus()

    def count_plus():
        local.count += 1
        print threading.current_thread().name, local.count

    if __name__ == '__main__':
        t1 = threading.Thread(target=task)
        t1.start()
        t2 = threading.Thread(target=task)
        t2.start()
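To make the multi-level call chain described above concrete, here is a minimal sketch (the names methodA/methodB/methodC are the hypothetical ones from the text) showing that with threading.local nothing needs to be passed down the chain:

    import threading

    local = threading.local()

    def methodC():
        # Reads the thread-bound attribute directly -- no parameter passing.
        print(threading.current_thread().name + ': ' + local.attr)

    def methodB():
        methodC()

    def methodA():
        methodB()

    def task(value):
        local.attr = value  # bound once per thread, isolated from other threads
        methodA()

    for v in ('first', 'second'):
        threading.Thread(target=task, args=(v,)).start()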
Starting a Rails console session

Active Record objects

At the heart of GitLab is a web application built using the Ruby on Rails framework. Thanks to this, we also get access to the amazing tools built right into Rails. In this guide, we’ll introduce the Rails console and the basics of interacting with your GitLab instance from the command line.

Caution: The Rails console interacts directly with your GitLab instance. In many cases, there are no handrails to prevent you from permanently modifying, corrupting or destroying production data. If you would like to explore the Rails console with no consequences, you are strongly advised to do so in a test environment.

This guide is targeted at GitLab system administrators who are troubleshooting a problem or need to retrieve some data that can only be done through direct access of the GitLab application. Basic knowledge of Ruby is needed (try this 30-minute tutorial for a quick introduction). Rails experience is helpful to have but not a must.

Your type of GitLab installation determines how to start a rails console. The following code examples will all take place inside the Rails console and also assume an Omnibus GitLab installation.

Under the hood, Rails uses Active Record, an object-relational mapping system, to read, write and map application objects to the PostgreSQL database. These mappings are handled by Active Record models, which are Ruby classes defined in a Rails app. For GitLab, the model classes can be found at /opt/gitlab/embedded/service/gitlab-rails/app/models.

Let’s enable debug logging for Active Record so we can see the underlying database queries made:

ActiveRecord::Base.logger = Logger.new(STDOUT)

Now, let’s try retrieving a user from the database:

user = User.find(1)

Which would return:

D, [2020-03-05T16:46:25.571238 #910] DEBUG -- : User Load (1.8ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1 => #<User id:1 @root>

We can see that we’ve queried the users table in the database for a row whose id column has the value 1, and Active Record has translated that database record into a Ruby object that we can interact with. Try some of the following:

user.username
user.created_at
user.admin

By convention, column names are directly translated into Ruby object attributes, so you should be able to do user.<column_name> to view the attribute’s value. Also by convention, Active Record class names (singular and in camel case) map directly onto table names (plural and in snake case) and vice versa. For example, the users table maps to the User class, while the application_settings table maps to the ApplicationSetting class.

You can find a list of tables and column names in the Rails database schema, available at /opt/gitlab/embedded/service/gitlab-rails/db/schema.rb.

You can also look up an object from the database by attribute name:

user = User.find_by(username: 'root')

Which would return:

D, [2020-03-05T17:03:24.696493 #910] DEBUG -- : User Load (2.1ms) SELECT "users".* FROM "users" WHERE "users"."username" = 'root' LIMIT 1 => #<User id:1 @root>

Give the following a try:

User.find_by(email: 'admin@example.com')
User.where.not(admin: true)
User.where('created_at < ?', 7.days.ago)

Did you notice that the last two commands returned an ActiveRecord::Relation object that appeared to contain multiple User objects?

Up to now, we’ve been using .find or .find_by, which are designed to return only a single object (notice the LIMIT 1 in the generated SQL query?). .where is used when it is desirable to get a collection of objects.
Let’s get a collection of non-admin users and see what we can do with it:

users = User.where.not(admin: true)

Which would return:

D, [2020-03-05T17:11:16.845387 #910] DEBUG -- : User Load (2.8ms) SELECT "users".* FROM "users" WHERE "users"."admin" != TRUE LIMIT 11 => #<ActiveRecord::Relation [#<User id:3 @support-bot>, #<User id:7 @alert-bot>, #<User id:5 @carrie>, #<User id:4 @bernice>, #<User id:2 @anne>]>

Now, try the following:

users.count
users.order(created_at: :desc)
users.where(username: 'support-bot')

In the last command, we see that we can chain .where statements to generate more complex queries. Notice also that while the collection returned contains only a single object, we cannot directly interact with it:

users.where(username: 'support-bot').username

Which would return:

Traceback (most recent call last): 1: from (irb):37 D, [2020-03-05T17:18:25.637607 #910] DEBUG -- : User Load (1.6ms) SELECT "users".* FROM "users" WHERE "users"."admin" != TRUE AND "users"."username" = 'support-bot' LIMIT 11 NoMethodError (undefined method `username' for #<ActiveRecord::Relation [#<User id:3 @support-bot>]>) Did you mean? by_username

We need to retrieve the single object from the collection by using the .first method to get the first item in the collection:

users.where(username: 'support-bot').first.username

We now get the result we wanted:

D, [2020-03-05T17:18:30.406047 #910] DEBUG -- : User Load (2.6ms) SELECT "users".* FROM "users" WHERE "users"."admin" != TRUE AND "users"."username" = 'support-bot' ORDER BY "users"."id" ASC LIMIT 1 => "support-bot"

For more on different ways to retrieve data from the database using Active Record, please see the Active Record Query Interface documentation.

In the previous section, we learned about retrieving database records using Active Record. Now, we’ll learn how to write changes to the database.

First, let’s retrieve the root user:

user = User.find_by(username: 'root')

Next, let’s try updating the user’s password:

user.password = 'password'
user.save

Which would return:

Enqueued ActionMailer::MailDeliveryJob (Job ID: 05915c4e-c849-4e14-80bb-696d5ae22065) to Sidekiq(mailers) with arguments: "DeviseMailer", "password_change", "deliver_now", #<GlobalID:0x00007f42d8ccebe8 @uri=#<URI::GID gid://gitlab/User/1>> => true

Here, we see that the .save command returned true, indicating that the password change was successfully saved to the database.

We also see that the save operation triggered some other action – in this case a background job to deliver an email notification. This is an example of an Active Record callback – code which is designated to run in response to events in the Active Record object life cycle. This is also why using the Rails console is preferred when direct changes to data are necessary, as changes made via direct database queries will not trigger these callbacks.

It’s also possible to update attributes in a single line:

user.update(password: 'password')

Or update multiple attributes at once:

user.update(password: 'password', email: 'hunter2@example.com')

Now, let’s try something different:

# Retrieve the object again so we get its latest state
user = User.find_by(username: 'root')
user.password = 'password'
user.password_confirmation = 'hunter2'
user.save

This returns false, indicating that the changes we made were not saved to the database. You can probably guess why, but let’s find out for sure:

user.save!
This should return:

Traceback (most recent call last): 1: from (irb):64 ActiveRecord::RecordInvalid (Validation failed: Password confirmation doesn't match Password)

Aha! We’ve tripped an Active Record Validation. Validations are business logic put in place at the application level to prevent unwanted data from being saved to the database and in most cases come with helpful messages letting you know how to fix the problem inputs.

We can also add the bang (Ruby speak for !) to .update:

user.update!(password: 'password', password_confirmation: 'hunter2')

In Ruby, method names ending with ! are commonly known as “bang methods”. By convention, the bang indicates that the method directly modifies the object it is acting on, as opposed to returning the transformed result and leaving the underlying object untouched. For Active Record methods that write to the database, bang methods also serve an additional function: they raise an explicit exception whenever an error occurs, instead of just returning false.

We can also skip validations entirely:

# Retrieve the object again so we get its latest state
user = User.find_by(username: 'root')
user.password = 'password'
user.password_confirmation = 'hunter2'
user.save!(validate: false)

This is not recommended, as validations are usually put in place to ensure the integrity and consistency of user-provided data.

Note that a validation error will prevent the entire object from being saved to the database. We’ll see a little of this in the next section. If you’re getting a mysterious red banner in the GitLab UI when submitting a form, this can often be the fastest way to get to the root of the problem.

At the end of the day, Active Record objects are just normal Ruby objects. As such, we can define methods on them which perform arbitrary actions. For example, GitLab developers have added some methods which help with two-factor authentication:

def disable_two_factor!
  transaction do
    update(
      otp_required_for_login: false,
      encrypted_otp_secret: nil,
      encrypted_otp_secret_iv: nil,
      encrypted_otp_secret_salt: nil,
      otp_grace_period_started_at: nil,
      otp_backup_codes: nil
    )
    self.u2f_registrations.destroy_all # rubocop: disable DestroyAll
  end
end

def two_factor_enabled?
  two_factor_otp_enabled? || two_factor_u2f_enabled?
end

(See: /opt/gitlab/embedded/service/gitlab-rails/app/models/user.rb)

We can then use these methods on any user object:

user = User.find_by(username: 'root')
user.two_factor_enabled?
user.disable_two_factor!

Some methods are defined by gems, or Ruby software packages, which GitLab uses. For example, the StateMachines gem which GitLab uses to manage user state:

state_machine :state, initial: :active do
  event :block do
    ...
  event :activate do
    ...
end

Give it a try:

user = User.find_by(username: 'root')
user.state
user.block
user.state
user.activate
user.state

Earlier, we mentioned that a validation error will prevent the entire object from being saved to the database. Let’s see how this can have unexpected interactions:

user.password = 'password'
user.password_confirmation = 'hunter2'
user.block

We get false returned! Let’s find out what happened by adding a bang as we did earlier:

user.block!

Which would return:

Traceback (most recent call last): 1: from (irb):87 StateMachines::InvalidTransition (Cannot transition state via :block from :active (Reason(s): Password confirmation doesn't match Password))

We see that a validation error from what feels like a completely separate attribute comes back to haunt us when we try to update the user in any way.
In practical terms, we sometimes see this happen with GitLab admin settings – validations are sometimes added or changed in a GitLab update, resulting in previously saved settings now failing validation. Because you can only update a subset of settings at once through the UI, in this case the only way to get back to a good state is direct manipulation via the Rails console.

Get a user by primary email address or username:

User.find_by(email: 'admin@example.com')
User.find_by(username: 'root')

Get a user by primary OR secondary email address:

User.find_by_any_email('user@example.com')

Note: find_by_any_email is a custom method added by GitLab developers rather than a Rails-provided default method.

Get a collection of admin users:

User.admins

Note: admins is a scope convenience method which does where(admin: true) under the hood.

Get a project by its path:

Project.find_by_full_path('group/subgroup/project')

Note: find_by_full_path is a custom method added by GitLab developers rather than a Rails-provided default method.

Get a project’s issue or merge request by its numeric ID:

project = Project.find_by_full_path('group/subgroup/project')
project.issues.find_by(iid: 42)
project.merge_requests.find_by(iid: 42)

Note: iid means “internal ID” and is how we keep issue and merge request IDs scoped to each GitLab project.

Get a group by its path:

Group.find_by_full_path('group/subgroup')

Get a group’s related groups:

group = Group.find_by_full_path('group/subgroup')
# Get a group's parent group
group.parent
# Get a group's child groups
group.children

Get a group’s projects:

group = Group.find_by_full_path('group/subgroup')
# Get group's immediate child projects
group.projects
# Get group's child projects, including those in sub-groups
group.all_projects

Get CI pipeline or builds:

Ci::Pipeline.find(4151)
Ci::Build.find(66124)

Note: The pipeline and job #ID numbers increment globally across your GitLab instance, so there’s no need to use an internal ID attribute to look them up, unlike with issues or merge requests.

Get the current application settings object:

ApplicationSetting.current
Description

Write a program to solve a Sudoku puzzle by filling the empty cells. A sudoku solution must satisfy all of the following rules: Each of the digits 1-9 must occur exactly once in each row. Each of the digits 1-9 must occur exactly once in each column. Each of the digits 1-9 must occur exactly once in each of the nine 3x3 sub-boxes of the grid. Empty cells are indicated by the character '.'.

A sudoku puzzle… …and its solution, with the solved numbers marked in red.

Note: The given board contains only digits 1-9 and the character '.'. You may assume that the given Sudoku puzzle will have a single unique solution. The given board size is always 9x9.

Explanation: backtracking.

Python Solution

from typing import List

class Solution:
    def solveSudoku(self, board: List[List[str]]) -> None:
        """
        Do not return anything, modify board in-place instead.
        """
        board_assignment = self.get_board_assignment(board)
        self.backtracking_helper(board, 0, board_assignment)

    def backtracking_helper(self, board2d, index, board_assignment):
        if index == 81:
            return True
        row_index = index // 9
        column_index = index % 9
        if board2d[row_index][column_index] != '.':
            return self.backtracking_helper(board2d, index + 1, board_assignment)
        unassigned_variables = [i for i in range(1, 10)]
        for value in unassigned_variables:
            value = str(value)
            if not self.is_valid_assignment(row_index, column_index, value, board_assignment):
                continue
            self.add_value_to_assignment(board2d, board_assignment, column_index, value, row_index)
            if self.backtracking_helper(board2d, index + 1, board_assignment):
                return True
            self.remove_value_from_assignment(board2d, board_assignment, column_index, value, row_index)
        return False

    def remove_value_from_assignment(self, board2d, board_assignment, column_index, value, row_index):
        board2d[row_index][column_index] = '.'
        del board_assignment['columns'][column_index][value]
        del board_assignment['rows'][row_index][value]
        del board_assignment['boxes'][row_index // 3 * 3 + column_index // 3][value]

    def add_value_to_assignment(self, board2d, board_assignment, column_index, value, row_index):
        board2d[row_index][column_index] = value
        board_assignment['rows'][row_index][value] = True
        board_assignment['columns'][column_index][value] = True
        board_assignment['boxes'][row_index // 3 * 3 + column_index // 3][value] = True

    def is_valid_assignment(self, row_index, column_index, digit, board_assignment):
        if digit in board_assignment['rows'][row_index]:
            return False
        if digit in board_assignment['columns'][column_index]:
            return False
        if digit in board_assignment['boxes'][row_index // 3 * 3 + column_index // 3]:
            return False
        return True

    def get_board_assignment(self, board2d):
        rows = [{} for i in range(0, 9)]
        columns = [{} for j in range(0, 9)]
        boxes = [{} for k in range(0, 9)]
        for i in range(0, len(board2d)):
            for j in range(0, len(board2d[0])):
                value = board2d[i][j]
                box_index = (i // 3) * 3 + j // 3
                if value != '.':
                    rows[i][value] = True
                    columns[j][value] = True
                    boxes[box_index][value] = True
        board_assignment = {'rows': rows, 'columns': columns, 'boxes': boxes}
        return board_assignment

Time complexity: O((9!)^9). Space complexity: the board size is fixed, and the space is used to store the board, rows, columns and boxes structures, each containing 81 elements.
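For a quick local smoke test of the solver (the sample board below is the standard one from the problem statement; on LeetCode the typing import and the board are supplied for you):

board = [["5","3",".",".","7",".",".",".","."],
         ["6",".",".","1","9","5",".",".","."],
         [".","9","8",".",".",".",".","6","."],
         ["8",".",".",".","6",".",".",".","3"],
         ["4",".",".","8",".","3",".",".","1"],
         ["7",".",".",".","2",".",".",".","6"],
         [".","6",".",".",".",".","2","8","."],
         [".",".",".","4","1","9",".",".","5"],
         [".",".",".",".","8",".",".","7","9"]]

Solution().solveSudoku(board)  # solves in place
for row in board:
    print(''.join(row))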
Perhaps the best known fractal of all: the Mandelbrot set. Since I was already working on Python code that would render an image given a function (for a future post), I figured that I might as well render fractals with it.

The basic idea is simple. Use pillow (the successor to PIL) to create an empty image of a given size. Then, call a given function for each point in that image, passing the x and y coordinates of the point as parameters. Basically, the build-flomap* function I use all the time in Racket. It turns out that's actually really straightforward:

def generate_image(width, height, generator):
    ''' Generate an RGB image using a generator function.

    width, height -- the size of the generated image
    generator -- a function that takes (x, y) and returns (r, g, b)
    '''

    # Generate the data as a row-major list of (r, g, b)
    data = [generator(x, y) for y in range(height) for x in range(width)]

    # Pack that into a Pillow image and return it
    img = PIL.Image.new('RGB', (width, height))
    img.putdata(data)
    return img

One downside of this is that it's relatively slow (at least on the big multi-core machines we have now). Luckily, we can use the multiprocessing module to speed things up:

def generate_image(width, height, generator, threads = 1):
    ''' Generate an RGB image using a generator function.

    width, height -- the size of the generated image
    generator -- a function that takes (x, y) and returns (r, g, b)
    threads -- if != 1, use multiprocessing to spawn this many processes
    '''

    # Generate the data as a row-major list of (r, g, b)
    if threads == 1:
        data = [generator(x, y) for y in range(height) for x in range(width)]
    else:
        pool = multiprocessing.Pool(threads)
        data = pool.starmap(generator, [(x, y) for y in range(height) for x in range(width)])

    # Pack that into a Pillow image and return it
    img = PIL.Image.new('RGB', (width, height))
    img.putdata(data)
    return img

By using multiprocessing rather than threading, we are actually spawning multiple Python processes, so we get a true parallel speedup. Since this program is almost entirely CPU bound, threading (with Python's global interpreter lock) wouldn't actually be any faster.

An aside: Using starmap allows us to pass multiple parameters to the function we are mapping over. Pool.starmap was only introduced in Python 3.3, so make sure you have a sufficiently new version.

With that, we can make some pretty pictures like I'm sure I've shown off before.

generate_image(
    400, 300,
    lambda x, y: (
        (x * y) % 256,
        (x + y) % 256,
        max(x, y) % 256
    )
).save('sample.png')

Yes, I realize that's not the most Pythonic code in the world. And because the body of a Python lambda has to be an expression, you cannot write nearly as complicated functions as you could in Racket. It's perfectly valid though. :)

Okay, so we have a way to generate images, let's use it to generate Mandelbrot sets. The basic idea of the Mandelbrot set is surprisingly simple: for a complex number c, the sequence defined by z_0 = 0 and z_{n+1} = z_n^2 + c either does or does not escape to infinity. If the result remains bounded as n \to \infty, the number is part of the Mandelbrot set. If not, it's not. Because Python has built-in support for complex numbers, this code is fairly elegant:

def make_mandelbrot_generator(width, height, center, size, max_iterations = 256):
    ''' A generator that makes generate_image compatible mandelbrot generators.
    width, height -- the size of the resulting image (used for scale)
    center -- the focus point of the image
    size -- the size of the larger dimension
    max_iterations -- the scale to check before exploding, used for coloring
    '''

    # Scale the size so that it is the size of the larger dimension
    if width >= height:
        size_x = size
        size_y = size * height / width
    else:
        size_x = size * width / height
        size_y = size

    # Convert to a bounding box
    min_x = center[0] - size_x / 2
    max_x = center[0] + size_x / 2
    min_y = center[1] - size_y / 2
    max_y = center[1] + size_y / 2

    def generator(x, y):
        # Scale to the mandelbrot frame; convert to a complex number
        x = (x / width) * (max_x - min_x) + min_x
        y = (y / height) * (max_y - min_y) + min_y
        c = x + y * 1j

        # Iterate until we escape to infinity or run out of iterations
        # For our purposes, we can consider infinity = 2
        z = 0
        for iteration in range(max_iterations):
            z = z * z + c

            # Size is r of polar coordinates (r, phi)
            (r, phi) = cmath.polar(z)
            if r > 2:
                break

        g = int(256 * iteration / max_iterations)
        return (g, g, g)

    return generator

I've chosen here to make a function that returns the actual color generator primarily so that we would have access to the width and height within the main function.

Amusingly, it's been proven that if the magnitude of z_n crosses 2, it will go to infinity. Since r is the magnitude in the polar coordinate system (r, ϕ), we can use that as an escape hatch and even as a basic way to color the output.

One side note: using the multiprocessing module, we have to be able to pickle any variables passed to the function called. Functions defined in the global scope can be pickled, but functions used directly as parameters to other functions cannot; don't ask me why. So if threads is not 1, this does not work:

generate_image(
    400, 300,
    make_mandelbrot_generator(400, 300, (-0.5, 0), 3),
    threads = 4
)

But this does:

generator = make_mandelbrot_generator(400, 300, (-0.5, 0), 3)
generate_image(400, 300, generator, threads = 4)

Weird. Anyways, what do we get when we try it out? Beautiful! We need some color.

Let's introduce one more parameter to the make_mandelbrot_generator function: coloring. Basically, a function that takes in a number in the range [0, 1] (which we're already computing; that is iteration / max_iterations) and returns an RGB color. That way, we can have some more interesting colorations. For example, the grayscale coloring function from earlier:

def grayscale(v):
    '''Simple grayscale value.'''
    g = int(256 * v)
    return (g, g, g)

Or how about instead, we render something in blue and red. Start at black, then fade up the blue channel, crossfade to red in the next third, and fade back to black in the last:

def hot_and_cold(v):
    '''Scale from black to blue to red and back to black.'''
    r = g = b = 0

    if v < 1/3:
        v = 3 * v
        b = int(256 * v)
    elif v < 2/3:
        v = 3 * (v - 1/3)
        r = int(256 * v)
        b = int(256 * (1 - v))
    else:
        v = 3 * (v - 2/3)
        r = int(256 * (1 - v))

    return (r, g, b)

Let's render that one instead:

generator = make_mandelbrot_generator(400, 300, (-0.5, 0), 3, coloring = hot_and_cold)
generate_image(400, 300, generator, threads = 4)

Excellent. We have a simple Mandelbrot generator. It's not exactly what I set out to do for this post (really only the generate_image function is), but I think it's pretty cool.
As a bonus round, I made something of a basic testing framework:

THREAD_COUNT = max(1, multiprocessing.cpu_count() - 1)

SIZES = [
    (400, 300),
    (1920, 1080)
]

COLORINGS = [
    ('grayscale', grayscale),
    ('hot-and-cold', hot_and_cold),
]

IMAGES = [
    ('default', (-0.5, 0), 3),
    # http://www.nahee.com/Derbyshire/manguide.html
    ('seahorse-valley', (-0.75, 0.1), 0.05),
    ('triple-spiral-valley', (0.088, 0.654), 0.25),
    ('quad-spiral-valley', (0.274, 0.482), 0.005),
    ('double-scepter-valley', (-0.1, 0.8383), 0.005),
    ('mini-mandelbrot', (-1.75, 0), 0.1),
]

for width, height in SIZES:
    for image_name, center, size in IMAGES:
        for coloring_name, coloring in COLORINGS:
            filename = os.path.join('{width}x{height}', 'mandelbrot_{name}_{width}x{height}_{coloring}.png')
            filename = filename.format(
                name = image_name,
                width = width,
                height = height,
                coloring = coloring_name,
            )

            generator = make_mandelbrot_generator(width, height, center, size, coloring = coloring)

            start = time.time()
            img = generate_image(
                width, height,
                generator,
                threads = THREAD_COUNT
            )
            end = time.time()

            if not os.path.exists(os.path.dirname(filename)):
                os.makedirs(os.path.dirname(filename))
            img.save(filename)

            print('{} generated in {} seconds with {} threads'.format(
                filename,
                end - start,
                THREAD_COUNT
            ))

multiprocessing.cpu_count() - 1 means that I leave one processor free for other work (I was having issues with my computer freezing; multiprocessing is good at that). Other than that, generate a bunch of images and shove them into directories by size. Here are a few examples from nahee.com: Seahorse Valley, Double Scepter Valley, Triple Spiral Valley, Quad Spiral Valley, Mini Mandelbrot. Or how about one nice large one (right click, save as): So much detail! Enjoy!
When exporting a large number of frames, is there a way to track the duration of the process?

MishaHeesakkers last edited by gferreira

I'm currently setting up a Sublime Text 3 workflow and I was wondering if I could print out some feedback about when the build starts and when it ends. The code below gets called when saveImage is done building. Is there a way to bind another callback function to saveImage?

os.system("open --background -a Preview " + EXPORT_PATH)

Thanks in advance!

gferreira last edited by

this should give you the perceived execution time of the script:

import time
start = time.time()
# do something
end = time.time()
print(end - start)

there are plenty of options to have some sort of progress bar that gets updated for each frame. If this doesn't make sense I can narrow it down to a simpler example! good luck!

MishaHeesakkers last edited by

@frederik makes sense! I can easily print out the progress of the executed frames, but I can't figure out how to get some kind of progress feedback when the saveImage() function is called when exporting a larger .mp4 file. Any ideas?

oh, there is indeed no progress callback on saveImage(..) Doing some googling on 'progress' and 'ffmpeg' does not result in a clear solution. DrawBot has a small wrapper around ffmpeg to generate movies: see https://github.com/typemytype/drawbot/blob/master/drawBot/context/tools/mp4Tools.py
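Since there is no completion callback but the saveImage() call blocks until the movie is written, one workable approach is simply to bracket the call with timestamps and print per-frame progress from the drawing loop. A minimal sketch, assuming a hypothetical NUM_FRAMES and that the script runs inside DrawBot (where newPage and saveImage are built in; from an external editor they come from "from drawBot import *"):

import time

NUM_FRAMES = 100  # hypothetical frame count

for i in range(NUM_FRAMES):
    newPage(500, 500)
    # ... draw the frame here ...
    print("frame %d of %d drawn" % (i + 1, NUM_FRAMES))

print("build started")
start = time.time()
saveImage("export.mp4")  # blocks until the movie is fully written
print("build ended after %.2f seconds" % (time.time() - start))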
Cross-Encoder for Quora Duplicate Questions Detection

Training Data

Performance

For performance results of this model, see [SBERT.net Pre-trained Cross-Encoders](https://www.sbert.net/docs/pretrained_cross-encoders.html).

Usage

Pre-trained models can be used like this:

from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name')
scores = model.predict([('Query1', 'Paragraph1'), ('Query2', 'Paragraph2')])

#e.g.
scores = model.predict([('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'), ('What is the size of New York?', 'New York City is famous for the Metropolitan Museum of Art.')])

Usage with Transformers AutoModel

You can also use the model directly with the Transformers library (without the SentenceTransformers library):

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')

features = tokenizer(['How many people live in Berlin?', 'What is the size of New York?'], ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = torch.nn.functional.sigmoid(model(**features).logits)
    print(scores)
This documentation is not for the latest stable Salvus version.

%matplotlib inline
# This notebook will use this variable to determine which
# remote site to run on.
import os

SALVUS_FLOW_SITE_NAME = os.environ.get("SITE_NAME", "local")
PROJECT_DIR = "project"

An accurate solution to the wave equation is a requirement for a wide variety of seismological research. In this tutorial, we will validate the accuracy of Salvus by comparing numerically calculated seismograms to semi-analytical solutions of Lamb's Problem in 2-D. In addition to giving us confidence in the synthetic data we will use in future tutorials, it also gives us a chance to gently learn some of the key features of the SalvusProject API.

Lamb's problem is concerned with the behavior of the elastic wave equation in the presence of a half-space bounded by a free-surface condition. In our solution we expect both direct arrivals and those reflected from the free surface, along with a contribution from the 2-D Rayleigh wave. To validate the solutions generated with Salvus, we will compare our results with semi-analytical ones computed using EX2DDIR. We'll consider a half-space bounded above by a free surface, and excite waves using a Ricker source with a center frequency of 15 Hz. This setup keeps compute times very low, while also allowing a fair number of wavelengths to propagate within our domain.

To get started, let's first import all the Python tools we'll need.

import pathlib
import numpy as np
import salvus.namespace as sn

Before we initialize our project, we'll first need to initialize the spatial domain to which our project corresponds. In this case we'll be using a simple 2-D box domain. Don't worry, we'll make things a bit more exciting in future tutorials. The box domain can easily be constructed from a set of two-dimensional extents as shown in the cell below.

d = sn.domain.dim2.BoxDomain(x0=0.0, x1=2000.0, y0=0.0, y1=1000.0)

For a given project the domain we specify is immutable; its extents and characteristics are used to infer other information regarding our meshes, simulations, and data. With the simple domain definition given above, we're now ready to initialize our project. To do this we can use the Project.from_domain() constructor as shown below. This function takes a path (which must not yet exist), and a domain object such as the one we just constructed.

# Uncomment the following line to delete a
# potentially existing project for a fresh start
# !rm -rf project
if pathlib.Path(PROJECT_DIR).exists():
    print("Opening existing project.")
    p = sn.Project(path=PROJECT_DIR)
else:
    print("Creating new project.")
    p = sn.Project.from_domain(path=PROJECT_DIR, domain=d)

Creating new project.

If the cell executed without any problems, you should now see a folder in your current directory with the name project. This is where all the relevant information relating to your project will be stored. Just so we can get a hang of the basic structure of a project, let's open up the folder in our file browser. Most operating systems will understand the commands below, and will open the project folder in another window. Just uncomment the line for your operating system.

# On Mac OSX
# !open project

# On Linux:
# !xdg-open project

With our domain initialized and our project created, we can now go right ahead and start preparing some scientific data. The first thing we'll do with the project is add observed data to it. In this case our observed data corresponds to a semi-analytic solution to Lamb's problem, as described in the introduction.
These data are stored in an HDF5 file named reference_data.h5 in the current directory. Some data formats, such as ASDF or SEGY, describe their data with associated headers. We'll see how to add these types of data in a later tutorial, but in this case we are just reading in raw waveform traces with little to no meta information. Because of this we'll need to assist Salvus a little and tell the project which events this raw data refers to. This information is passed in the form of an EventCollection object which, at its most basic, is a data structure that relates lists of source definitions to lists of receiver definitions. These definitions can be in the form of pressure injections, force vectors, or GCMT moment tensors for sources, as well as pressure, velocity, or strain (etc.) sensors for receivers.

In the coordinate system of the reference dataset which we'll add, we've placed a single vector source at the location (x = 1000 m, y = 500 m). This source can be defined with the help of the simple_config helper as in the cell below.

srcs = sn.simple_config.source.cartesian.VectorPoint2D(
    x=1000.0, y=500.0, fx=0.0, fy=-1.0
)

The data from this source was received at an array of 5 receivers placed at y = 800 m, with x spaced evenly between 1010 m and 1410 m. For these and other simple arrays of receivers, the simple_config helper allows us to define the whole set in one go.

recs = sn.simple_config.receiver.cartesian.collections.ArrayPoint2D(
    y=800.0, x=np.linspace(1010.0, 1410.0, 5), fields=["displacement"]
)

With our sources and receivers now defined, we can add the combination of them both to our project as an EventCollection object.

p += sn.EventCollection.from_sources(sources=[srcs], receivers=recs)

Note here the syntax we used. An EventCollection, along with several other relevant objects, can be added to a project by simply using the += operator. Once the object is successfully added to the project it is then "serialized", or saved, within the project directory structure. The power and usefulness of this concept will become apparent in a later tutorial -- for now all you need to know is that the event collection is now officially a part of our project!

Now that we've defined a full "event", we can go ahead and add our "observed" data. We do this by explicitly associating the event with the appropriate data file. Since the event does not have a natural name, as it would in the case of an event gathered from the GCMT catalogue for example, the project has named it for us internally. Events are given numerical names of the form "event_xxxx", which correspond to the order in which they were added. Below we add the reference data to our project with the tag "reference", and associate it with the event we just created, or "event_0000".

p.waveforms.add_external(
    data_name="reference",
    event="event_0000",
    data_filename="./reference_data.h5",
)

Now that the data is added, we can do a quick visualization of its contents. For 2-D box domains we can choose to plot individual events as either a shotgather, or a collection of wiggles, or both! Try experimenting with the list passed to .plot() below to see how the different options look.

p.waveforms.get(data_name="EXTERNAL_DATA:reference", events=["event_0000"])[
    0
].plot(component="X", receiver_field="displacement")

All right, that's enough setup for now. Let's get going with some simulations of our own. The analytical solution was computed in an unbounded homogeneous isotropic elastic medium with material parameters specified in SI units as rho = 2200 kg/m^3, vp = 3000 m/s and vs = 1847.5 m/s.
If you recall from the presentation this morning, a complete model definition in Salvus is made up of a combination of a background model and a (possibly empty) collection of volumetric models. As the analytic solution was computed in a homogeneous medium, we don't need to concern ourselves with (2- or 3-D) volumetric models for now. So, the next step is to define our background model using the Salvus model interface. Since no volumetric models are required, we only need the background model to complete our final full model configuration.

bm = sn.model.background.homogeneous.IsotropicElastic(
    rho=2200.0, vp=3000.0, vs=1847.5
)
mc = sn.ModelConfiguration(background_model=bm, volume_models=None)

Note that up until now we have not specified any information regarding the frequency content of the data we are planning on simulating, and in fact all the parameters we've specified have been frequency independent. This is deliberate, as it is often the case that information on material parameters is provided independent of frequency. The next step is to add a time-frequency axis to our project, which enters in the form of an EventConfiguration. Here, at a bare minimum, we need to specify what type of source wavelet we would like to model, as well as provide some basic information about the temporal extent of our upcoming simulations. The reference data were computed using a Ricker wavelet with a center frequency of 15 Hz and, looking at the traces plotted above, we can see that the data run for a bit more than 0.5 seconds. These parameters are now used to define our EventConfiguration object.

ec = sn.EventConfiguration(
    wavelet=sn.simple_config.stf.Ricker(center_frequency=15.0),
    waveform_simulation_configuration=sn.WaveformSimulationConfiguration(
        end_time_in_seconds=0.6
    ),
)

To get a better sense of what our wavelet looks like in both the time and frequency domain, we can easily plot its characteristics in the cell below.

ec.wavelet.plot()

We quickly see that, while the center frequency of the wavelet was specified to be 15 Hz, there is actually a fair bit of energy that exists at frequencies higher than this. It's important to design our simulations so that they properly resolve all the frequencies we are interested in.

The final step in defining a simulation is pulling together all the above into a single reproducible SimulationConfiguration. A SimulationConfiguration is a unique identifier that brings together the model, the source wavelet parameterization, and a proxy for the resolution of the simulation. If you recall from the theoretical presentations earlier today, we are often satisfied with a simulation mesh comprised of one 4th-order spectral element per simulated wavelength. The question then remains: given a broadband source wavelet, which frequency do we want to mesh for? The wavelet plot above gives us a clue: the vast majority of the energy in the current wavelet is contained at frequencies below 30 Hz. For our first attempt at matching the analytic solution, then, we'll require that our mesh be generated using one element per wavelength at a frequency of 30 Hz. As you are probably becoming familiar with by now, we can add the relevant SimulationConfiguration to our project as below.
p += sn.SimulationConfiguration(
    name="simulation_1",
    max_frequency_in_hertz=30.0,
    elements_per_wavelength=1.0,
    model_configuration=mc,
    event_configuration=ec,
)

event = sn.EventCollection.from_sources(sources=[srcs], receivers=None)
w = p.simulations.get_input_files("simulation_1", events=event)

[2020-12-03 19:01:41,030] INFO: Creating mesh. Hang on.

So far, regarding our simulation, we have defined a model configuration and an event configuration, and combined them into a SimulationConfiguration. In fact, this is all we need to do! Before we actually run the simulation though, it can be helpful to get a visual overview of what is about to happen. SalvusProject provides a small convenience function to visualize a SimulationConfiguration directly in the notebook, as below. This function takes a list of events as well, for the purpose of overplotting sources and receivers on the resultant domain. Let's have a look.

w[0][0]

<salvus.flow.simple_config.simulation.Waveform object at 0x7fdb23b9b590>

p.viz.nb.simulation_setup(
    simulation_configuration="simulation_1", events=["event_0000"])

<salvus.flow.simple_config.simulation.Waveform object at 0x7fdb237539d0>

Feel free to experiment with the dropdown menus and buttons. This visualization can really help debug obvious issues.

At this point those of you familiar with older versions of Salvus might be wondering: where did the mesh come from? In SalvusProject the complexity of mesh generation is moved into the background, and is handled internally via a reference to the SimulationConfiguration object. While the benefits of this approach are small for small domains and homogeneous models, they will become much greater later on, when we consider 3-D models and domains with topography.

With everything ready to go, it's now time to run our first simulation! The p.simulations.launch() command below takes a few arguments worth describing:

site_name: This is an identifier which tells Flow whether you're running on your local machine, some remote cluster, or perhaps the old chess computer in your grandfather's basement. As long as Salvus has been set up correctly on the specified site, all data transfers to / from the local or remote machine will happen automatically. Additionally, if a job management system is present on the remote site, Flow will monitor the job queue.
ranks_per_job: This is the number of MPI ranks the job will run on, and can range from 1 to whatever your license will allow.
events: A list of events for which to run simulations.
simulation_configuration: The configuration for which to run simulations.

p.simulations.launch(
    ranks_per_job=2,
    site_name=SALVUS_FLOW_SITE_NAME,
    events=p.events.list(),
    simulation_configuration="simulation_1",
)

[2020-12-03 19:01:42,272] INFO: Submitting job ...
Uploading 1 files...
🚀 Submitted [email protected]local
1

And that's it! The simulations are off and running. SalvusFlow will take care of abstracting the machine architecture, and SalvusProject will take care of saving all the output data into the correct location, copying it from any remote machines as necessary. We can get the current status of the simulations by calling p.simulations.query() as below.

p.simulations.query(block=True)

True

Since the simulations are so small, they should not take more than a few seconds to run regardless of the machine. Once they are done, we can call p.viz.nb.waveforms() to compare the computed data to a reference dataset of our choosing.

p.viz.nb.waveforms(
    ["EXTERNAL_DATA:reference", "simulation_1"],
    receiver_field="displacement")
#!/usr/bin/python
# convert a jpilot with keyring plugin export to keepassx .xml format.
# Copyright (C) 2012 Peter Palfrader
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

import csv
import optparse
import string
import sys

parser = optparse.OptionParser()
parser.set_usage("%prog <IN> <OUT>")
(options, args) = parser.parse_args()
if len(args) > 2:
    parser.print_help()
    sys.exit(1)

outf = open(args[1], "w") if len(args) >= 2 else sys.stdout
inf = open(args[0], "r") if len(args) >= 1 else sys.stdin

pws = {}
c = csv.DictReader(inf, delimiter=',', quotechar='"')
for row in c:
    cat = row['Category']
    if cat not in pws:
        pws[cat] = []
    pws[cat].append(row)

dbheader = string.Template("""
<!DOCTYPE KEEPASSX_DATABASE>
<database>
""")
dbfooter = string.Template("""
</database>
""")
groupheader = string.Template("""
 <group>
  <title>$Category</title>
  <icon>1</icon>
""")
groupfooter = string.Template("""
 </group>
""")
entry = string.Template("""
  <entry>
   <title>$Name</title>
   <username>$Account</username>
   <password>$Password</password>
   <url></url>
   <comment>$Note</comment>
   <icon>1</icon>
   <creation></creation>
   <lastaccess></lastaccess>
   <lastmod></lastmod>
   <expire>Never</expire>
  </entry>
""")

groups = pws.keys()
groups.sort()

print >> outf, dbheader.substitute()
for g in groups:
    print >> outf, groupheader.substitute({'Category': g})
    for e in pws[g]:
        print >> outf, entry.substitute(e)
    print >> outf, groupfooter.substitute({'Category': g})
print >> outf, dbfooter.substitute()

# vim:set et:
# vim:set ts=4:
# vim:set shiftwidth=4:
First, we need to create the button that will connect to the Pi. To do this, we'll solder some wires to a normally open (NO) push button, forming a "pigtail". This type of button is an open circuit until it is pressed. You can use any size button you wish, just make sure it's normally open and is a push (not toggle) button. If you have a breadboard, you can use it to prototype and test your button and scripts. If not, no worries; this is a super simple circuit. Cut the ends off of two jumper wires and solder one of each to each button terminal. Then, use heat-shrink tubing to secure the soldered connections.

Now we'll need to create a few scripts: the listen script, which will watch for button presses and perform the reset when the button is pressed, and the init script, which starts and stops the listener. The reset itself essentially runs the following commands to reset the RetroArch config and reboot the Pi:

rm /opt/retropie/configs/all/emulationstation/es_input.cfg
rm /opt/retropie/configs/all/retroarch-joypads/*
sudo reboot

I'll walk you through the process of creating these scripts. SSH into your Pi and create the listen script:

sudo nano listen-for-reset.py

Then, paste the following into that file:

#!/usr/bin/env python
import glob
import subprocess

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(3, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Block here until the button pulls GPIO3 low
GPIO.wait_for_edge(3, GPIO.FALLING)

subprocess.call(['rm', '/opt/retropie/configs/all/emulationstation/es_input.cfg'], shell=False)
# With shell=False the wildcard isn't expanded, so glob the files ourselves
for f in glob.glob('/opt/retropie/configs/all/retroarch-joypads/*'):
    subprocess.call(['rm', f], shell=False)
subprocess.call(['shutdown', '-r', 'now'], shell=False)

Save and exit. Then, move the script into place and make it executable:

sudo mv listen-for-reset.py /usr/local/bin/
sudo chmod +x /usr/local/bin/listen-for-reset.py

Now we'll create the init script that will start/stop our listener. To create the script:

sudo nano listen-for-reset.sh

Enter the following:

#! /bin/sh
### BEGIN INIT INFO
# Provides: listen-for-reset.py
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
### END INIT INFO

# If you want a command to always run, put it here

# Carry out specific functions when asked to by the system
case "$1" in
  start)
    echo "Starting listen-for-reset.py"
    /usr/local/bin/listen-for-reset.py &
    ;;
  stop)
    echo "Stopping listen-for-reset.py"
    pkill -f /usr/local/bin/listen-for-reset.py
    ;;
  *)
    echo "Usage: /etc/init.d/listen-for-reset.sh {start|stop}"
    exit 1
    ;;
esac
exit 0

Save and exit. Then, move the file into /etc/init.d and make it executable:

sudo mv listen-for-reset.sh /etc/init.d/
sudo chmod +x /etc/init.d/listen-for-reset.sh

Finally, tell the script to run on boot:

sudo update-rc.d listen-for-reset.sh defaults

Now, you can reboot your Pi to start the script or start it manually:

sudo /etc/init.d/listen-for-reset.sh start

To test your setup, configure a controller and/or change its configuration from within a ROM. Press the button and after your Pi reboots you will be prompted to configure the connected controller from scratch. Now your friends (or kids) can mess around with the controller settings all they want and, poof, reset button. Questions? Comments? Post in the comments section below and I'll do my best to help you out!
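Before wiring this into init, it can be worth sanity-checking the button from an interactive Python session. This is just a quick hedged check of my own, not part of the original write-up (it assumes the button sits on GPIO3, as in the script above):

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(3, GPIO.IN, pull_up_down=GPIO.PUD_UP)
print(GPIO.input(3))  # expect 1 while released (pull-up), 0 while held down
GPIO.cleanup()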
I'm pretty new to Python and I'm having some trouble getting this snippet of code to work. I have some lines that work for copying shapefiles and standalone feature classes, but I haven't managed to copy feature classes that live inside a feature dataset. A "lookup table" exists with info on the source path, source name, target path, target name, etc. A field called 'BatchID' is used as a reference for what the user wants copied; a raw_input prompt in the code asks for that number, and once the user enters it, the data in the matching row(s) is copied. I keep getting the error:

ERROR 000732: Input Features: Dataset C:...file path here...\test1.gdb\SanTest does not exist or is not supported
Failed to execute (CopyFeatures).

Copy feature dataset features:

if batch_id == int(btch_num):
    ds = arcpy.ListFeatureClasses('', '', source_name)
    for fc in ds:
        print('These features ' + fc + ' are in the feature dataset!')
        arcpy.CopyFeatures_management(fc, os.path.join(target_path, os.path.splitext(fc)[0]))
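A hedged guess at the cause: ListFeatureClasses returns names relative to the current workspace, so CopyFeatures receives a bare name like 'SanTest' that it cannot resolve to a path. A minimal sketch of a fix, assuming source_path holds the geodatabase path from the lookup table:

import os
import arcpy

if batch_id == int(btch_num):
    # ListFeatureClasses works relative to the workspace, so point it at the gdb
    arcpy.env.workspace = source_path  # assumed, e.g. r'C:\...\test1.gdb'
    for fc in arcpy.ListFeatureClasses('', '', source_name):
        print('Feature class ' + fc + ' is in the feature dataset!')
        # hand CopyFeatures a full input path instead of a bare name
        src = os.path.join(source_path, source_name, fc)
        arcpy.CopyFeatures_management(src, os.path.join(target_path, os.path.splitext(fc)[0]))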
tf.keras.callbacks.EarlyStopping

Stop training when a monitored metric has stopped improving. Inherits From: Callback

tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto',
    baseline=None, restore_best_weights=False
)

Assuming the goal of a training is to minimize the loss, the metric to be monitored would be 'loss', and mode would be 'min'. A model.fit() training loop will check at the end of every epoch whether the loss is no longer decreasing, considering the min_delta and patience if applicable. Once it is found to be no longer decreasing, model.stop_training is marked True and the training terminates.

The quantity to be monitored needs to be available in the logs dict. To make it so, pass the loss or metrics at model.compile().

Args

monitor: Quantity to be monitored.
min_delta: Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta will count as no improvement.
patience: Number of epochs with no improvement after which training will be stopped.
verbose: Verbosity mode.
mode: One of {"auto", "min", "max"}. In "min" mode, training will stop when the quantity monitored has stopped decreasing; in "max" mode it will stop when the quantity monitored has stopped increasing; in "auto" mode, the direction is automatically inferred from the name of the monitored quantity.
baseline: Baseline value for the monitored quantity. Training will stop if the model doesn't show improvement over the baseline.
restore_best_weights: Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. An epoch will be restored regardless of the performance relative to the baseline. If no epoch improves on the baseline, training will run for patience epochs and restore weights from the best epoch in that set.

Example:

import numpy as np
import tensorflow as tf

callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
# This callback will stop the training when there is no improvement in
# the loss for three consecutive epochs.
model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
model.compile(tf.keras.optimizers.SGD(), loss='mse')
history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
                    epochs=10, batch_size=1, callbacks=[callback],
                    verbose=0)
len(history.history['loss'])  # Only 4 epochs are run.
4

Methods

get_monitor_value
get_monitor_value(logs)

set_model
set_model(model)

set_params
set_params(params)
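As an illustrative variation of my own (not from the page above), the same callback is commonly pointed at validation loss with restore_best_weights=True, so the model rolls back to its best epoch; the toy data below is an assumption:

import numpy as np
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=5, min_delta=1e-4, restore_best_weights=True)
model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
model.compile(tf.keras.optimizers.SGD(), loss='mse')
# validation_split carves out a held-out slice so 'val_loss' exists in logs
model.fit(np.arange(200).reshape(10, 20), np.zeros(10),
          validation_split=0.2, epochs=50, batch_size=1,
          callbacks=[early_stop], verbose=0)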
The Python standard output file is not available when running Python macros from the ... menu. Presenting the output of a module requires the Python interactive console. Features such as input(), print(), repr() and str() are available from the Python shell.

The Alternative Python Script Organizer (APSO) extension offers a msgbox() function out of its apso_utils module.

LibreOffice Basic proposes InputBox(), Msgbox() and Print() screen I/O functions. Python alternatives exist, relying either on the LibreOffice API Abstract Windowing Toolkit or on Python-to-Basic function calls. The latter proposes a syntax that is intentionally close to that of Basic, and uses a Python module next to a Basic module. The Scripting Framework API is used to perform Basic, BeanShell, JavaScript and Python inter-language function calls.

Python syntax:

MsgBox(txt, buttons=0, title=None)
InputBox(txt, title=None, defaultValue=None)
Print(txt)

Examples:

>>> import screen_io as ui
>>> reply = ui.InputBox('Please enter a phrase', title='Dear user', defaultValue="here..")
>>> rc = ui.MsgBox(reply, title="Confirmation of phrase")
>>> age = ui.InputBox('How old are you?', title="Hi")
>>> ui.Print(age)

Installation:

Copy the screen_io Python module into My macros within <UserProfile>/Scripts/python/pythonpath,
Copy the uiScripts Basic module into the My macros Standard Basic library,
Restart LibreOffice.

screen_io Python module

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

def MsgBox(prompt: str, buttons=0, title='LibreOffice') -> int:
    """ Displays a dialog box containing a message and returns a value."""
    xScript = _getScript("_MsgBox")
    res = xScript.invoke((prompt, buttons, title), (), ())
    return res[0]

def InputBox(prompt: str, title='LibreOffice', defaultValue='') -> str:
    """ Displays a prompt in a dialog box at which the user can enter text."""
    xScript = _getScript("_InputBox")
    res = xScript.invoke((prompt, title, defaultValue), (), ())
    return res[0]

def Print(message: str):
    """Outputs the specified strings or numeric expressions in a dialog box."""
    xScript = _getScript("_Print")
    xScript.invoke((message,), (), ())

import uno
from com.sun.star.script.provider import XScript

def _getScript(script: str, library='Standard', module='uiScripts') -> XScript:
    sm = uno.getComponentContext().ServiceManager
    mspf = sm.createInstanceWithContext("com.sun.star.script.provider.MasterScriptProviderFactory", uno.getComponentContext())
    scriptPro = mspf.createScriptProvider("")
    scriptName = "vnd.sun.star.script:"+library+"."+module+"."+script+"?language=Basic&location=application"
    xScript = scriptPro.getScript(scriptName)
    return xScript

uiScripts Basic module

Option Explicit

Private Function _MsgBox( prompt As String, Optional buttons As Integer, _
        Optional title As String ) As Integer
    _MsgBox = MsgBox( prompt, buttons, title )
End Function

Private Function _InputBox( prompt As String, Optional title As String, _
        Optional default As String) As String
    _InputBox = InputBox( prompt, title, default )
End Function

Private Sub _Print( msg As String )
    Print msg
End Sub
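As a quick usage sketch of my own (the macro name and strings are illustrative, not from the page above), a Python macro can chain these helpers:

import screen_io as ui

def greet_user(*args):
    # prompt for a value, then echo it back in a dialog
    name = ui.InputBox('What is your name?', title='Greeter', defaultValue='world')
    ui.MsgBox('Hello, %s!' % name, title='Greeter')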
The time module in Python

In the next tutorial in our series on handling date/time in Python, Quantrimang.com walks you through the time module and the time-related functions defined in it. Let's dive in!

Python has a time module for handling time-related tasks. To use the functions defined in the module, first import it:

import time

Below are the most commonly used time-related functions.

The most commonly used functions

time.time()

The time() function returns the number of seconds since the epoch, also known as the timestamp value. On Unix systems, 00:00:00 on 1/1/1970 UTC is called the epoch (the point where time begins).

import time
seconds = time.time()
print("Seconds since epoch:", seconds)

The output will look like this:

Seconds since epoch: 1562922590.7720907

time.ctime()

This method converts a time expressed as seconds since the epoch into a string representation.

import time

# seconds since the epoch
# written by Quantrimang.com
seconds = 1562983783.9618232
local_time = time.ctime(seconds)
print("Local time:", local_time)

Running the program returns the date and time corresponding to the seconds passed in:

Local time: Sat Jul 13 09:09:43 2019

If seconds is not passed, the current time is returned.

time.sleep()

The sleep() function suspends execution of the current thread for the given number of seconds.

import time
print("Start :", time.ctime())
time.sleep(3)
print("End :", time.ctime())

This method does not return any value; it only delays execution. Run the program to see the delay clearly.

Start : Sat Jul 13 09:33:52 2019
End : Sat Jul 13 09:33:57 2019

Before moving on to the other time-related functions, let's take a quick look at the time.struct_time class.

The time.struct_time class

Several functions in the time module, such as gmtime(), asctime() and so on, return a time.struct_time object. An example of a time.struct_time value:

time.struct_time(tm_year=2018, tm_mon=12, tm_mday=27, tm_hour=6, tm_min=35, tm_sec=17, tm_wday=3, tm_yday=361, tm_isdst=0)

Index  Attribute  Description
0      tm_year    Year: 0000, ..., 2018, ..., 9999
1      tm_mon     Month: 1, 2, ..., 12
2      tm_mday    Day of the month: 1, 2, ..., 31
3      tm_hour    Hour: 0, 1, ..., 23
4      tm_min     Minute: 0, 1, ..., 59
5      tm_sec     Second: 0, 1, ..., 61
6      tm_wday    Day of the week: 0, 1, ..., 6; Monday is 0
7      tm_yday    Day of the year: 1, 2, ..., 366
8      tm_isdst   DST flag: 0, 1 or -1

time.localtime()

The localtime() function in the time module takes a number of seconds as its argument and returns a struct_time in local time.

import time
result = time.localtime(1562983783)
print("Result:", result)
print("\nYear:", result.tm_year)
print("Hour:", result.tm_hour)

Running the program, the output is:

Result: time.struct_time(tm_year=2019, tm_mon=7, tm_mday=13, tm_hour=9, tm_min=9, tm_sec=43, tm_wday=5, tm_yday=194, tm_isdst=0)

Year: 2019
Hour: 9

If no seconds value is provided, or None is passed, the current time returned by time() is used.

time.gmtime()

The gmtime() function in the time module takes a number of seconds as its argument and returns a struct_time in UTC.
import time
result = time.gmtime(1562983783)
print("Result:", result)
print("\nYear:", result.tm_year)
print("Hour:", result.tm_hour)

Running the program, the output is:

Result: time.struct_time(tm_year=2019, tm_mon=7, tm_mday=13, tm_hour=2, tm_min=9, tm_sec=43, tm_wday=5, tm_yday=194, tm_isdst=0)

Year: 2019
Hour: 2

If no seconds value is provided, or None is passed, the current time returned by time() is used.

time.mktime()

The mktime() function in the time module takes a struct_time (or a 9-element tuple corresponding to struct_time) as its argument and returns the number of seconds since the epoch, in local time. It is the inverse of localtime().

import time
t = (2019, 7, 13, 9, 9, 43, 5, 194, 0)
local_time = time.mktime(t)
print("Local time:", local_time)

Running the program, the output is:

Local time: 1562983783.0

The example below shows how mktime() and localtime() are related.

import time
seconds = 1562983783

# returns struct_time
# written by Quantrimang.com
t = time.localtime(seconds)
print("t1: ", t)

# returns seconds from struct_time
s = time.mktime(t)
print("\ns:", s)

Output:

t1: time.struct_time(tm_year=2019, tm_mon=7, tm_mday=13, tm_hour=9, tm_min=9, tm_sec=43, tm_wday=5, tm_yday=194, tm_isdst=0)

s: 1562983783.0

time.asctime()

The asctime() function in the time module takes a struct_time (or a 9-element tuple corresponding to struct_time) as its argument and returns a string representing that time.

import time
t = (2019, 7, 13, 9, 9, 43, 5, 194, 0)
result = time.asctime(t)
print("Result:", result)

Output:

Result: Sat Jul 13 09:09:43 2019

time.strftime()

The strftime() function in the time module takes a struct_time (or a tuple corresponding to struct_time) as its argument and returns a string representing that time according to the format codes passed in.

import time
named_tuple = time.localtime()  # get struct_time
time_string = time.strftime("%m/%d/%Y, %H:%M:%S", named_tuple)
print(time_string)

Running the program, the output is:

07/15/2019, 08:46:58

In this example, %Y, %m, %d, %H, %M and %S are format codes:

%Y: year [0001, ..., 2018, 2019, ..., 9999]
%m: month [01, 02, ..., 11, 12]
%d: day of the month [01, 02, ..., 30, 31]
%H: hour [00, 01, ..., 22, 23]
%M: minute [00, 01, ..., 58, 59]
%S: second [00, 01, ..., 61]

time.strptime()

The strptime() function in the time module parses a string representing a point in time and returns a struct_time.

import time
time_string = "17 July, 2019"
result = time.strptime(time_string, "%d %B, %Y")
print(result)

The output looks like:

time.struct_time(tm_year=2019, tm_mon=7, tm_mday=17, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=2, tm_yday=198, tm_isdst=-1)
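To see how strftime() and strptime() mirror each other, here is a small round-trip example of my own (the format string is illustrative):

import time

s = time.strftime("%d %B, %Y", time.localtime())  # struct_time -> string
t = time.strptime(s, "%d %B, %Y")                 # string -> struct_time
print(s)
print(t.tm_year, t.tm_mon, t.tm_mday)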
Recently I've been trying to learn how to control stepper motors with Python. I can run one 28BYJ-48-5V at a time easily using a PiStep2 Quad with scripts which are readily available (see the list at the bottom), however I've been trying to modify the code to run two of these motors simultaneously. Since I'm still quite new to programming with Python, I've not been very successful at figuring out how exactly a script running two or more stepper motors simultaneously should be built. So, I'd need some advice here. This is the code I've modified:

Code:

#!/usr/bin/python
# Import required libraries
import sys
import time
import RPi.GPIO as GPIO

# Use BCM GPIO references
# instead of physical pin numbers
GPIO.setmode(GPIO.BCM)

# Define GPIO signals to use
# Physical pins 11,15,16,18
# GPIO17,GPIO22,GPIO23,GPIO24
StepPins = [17,22,23,24]
StepPins2 = [20,26,16,19]

# Set all pins as output
for pin in StepPins + StepPins2:
    print "Setup pins"
    GPIO.setup(pin,GPIO.OUT)
    GPIO.output(pin, False)

# Define advanced sequence
# as shown in manufacturers datasheet
Seq = [[1,0,0,1],
       [1,0,0,0],
       [1,1,0,0],
       [0,1,0,0],
       [0,1,1,0],
       [0,0,1,0],
       [0,0,1,1],
       [0,0,0,1]]

StepCount = len(Seq)
StepDir = 1 # Set to 1 or 2 for clockwise
            # Set to -1 or -2 for anti-clockwise

# Read wait time from command line
if len(sys.argv)>1:
    WaitTime = int(sys.argv[1])/float(1000)
else:
    WaitTime = 1/float(1000)

# Initialise variables
StepCounter = 0

# Start main loop
while True:
    #print (StepCounter,)
    # print (Seq[StepCounter])
    for pin in range(0, 8):
        xpin = StepPins[pin]
        if Seq[StepCounter][pin]!=0:
            #print (" Enable GPIO %i" %(xpin))
            GPIO.output(xpin, True)
        else:
            GPIO.output(xpin, False)

    StepCounter += StepDir

    # If we reach the end of the sequence
    # start again
    if (StepCounter>=StepCount):
        StepCounter = 0
    if (StepCounter<0):
        StepCounter = StepCount+StepDir

    # Wait before moving on
    time.sleep(WaitTime)

I have also tried altering the StepPins part like this:

Code:

StepPins = [[17,22,23,24],
            [20,26,16,19]]

# Set all pins as output
for pin in StepPins:
    print "Setup pins"
    GPIO.setup(pin,GPIO.OUT)
    GPIO.output(pin, False)

But every alteration thus far results in:

Traceback (most recent call last):
File *path here* in <module>
xpin = StepPins[pin]
IndexError: list index out of range

So I returned to the first alteration and started wondering how I should add StepPins2 into "xpin = StepPins[pin]", because separating them with , or + will not work. Here is the resources list I've been using so far:

http://4tronix.co.uk/pistep/stepper.py
http://4tronix.co.uk/pistep/stepctrl.py
https://www.raspberrypi-spy.co.uk/2012/ ... in-python/

I've been looking at projects and scripts of other hobbyists too, but as a beginner I don't yet want to confuse myself with too far-fetched examples. Unless modifying the existing code is an attempt doomed to fail. Could this still be made to work?
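Not an authoritative answer, but one way to make the nested-list alteration work is to step both motors inside the same main loop, indexing each motor's own four pins instead of a flat range of eight. A minimal sketch under that assumption (pin numbers as in the post; RPi.GPIO's setup() accepts a list of channels, so the setup loop can stay as it is):

# replacement for the body of the main loop, assuming the nested StepPins list
StepPins = [[17, 22, 23, 24],   # motor 1
            [20, 26, 16, 19]]   # motor 2

while True:
    for pins in StepPins:            # advance each motor by the same step
        for i in range(4):           # 4 coil pins per motor, not 8
            GPIO.output(pins[i], Seq[StepCounter][i] != 0)

    StepCounter += StepDir

    # If we reach the end of the sequence, start again
    if StepCounter >= StepCount:
        StepCounter = 0
    if StepCounter < 0:
        StepCounter = StepCount + StepDir

    time.sleep(WaitTime)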
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/

'''
Name: findSignalsAndSlotsQt
This script gives you all available signals and slots on a Qt widget object.
Normally you can just check the documentation, however if custom signals and slots are used it's hard to find them.
We do this by using the meta object from the widget.
I used this to find the timeChanged event on the Maya timeControl widget.
'''

import sys
from PySide.QtGui import * ## pip install PySide
from PySide import QtCore

def get_widget(name):
    '''
    Kind of slow method of finding a widget by object name.
    :param name:
    :return:
    '''
    for widget in QApplication.allWidgets():
        try:
            if name in widget.objectName():
                return widget
        except Exception as e:
            print e
            pass
    return None

def test( *arg, **kwarg):
    '''
    Simple test function to see what the signal sends out.
    :param arg:
    :param kwarg:
    :return:
    '''
    print "The args are: ", arg
    print "The kwargs are: ", kwarg
    print

if __name__ == "__main__":
    ## Here we make a simple QLineEdit for argument's sake ...
    app = QApplication(sys.argv)
    wid = QLineEdit()
    wid.setObjectName("myLineEdit")
    wid.show()

    ## Find the widget by name.
    ## See the qt ui list hierarchy script to find all widgets in a qt ui.
    widgetObjectName = "myLineEdit"
    widgetObject = get_widget(widgetObjectName)
    if not widgetObject:
        raise Exception("Could not find widget: %s" %widgetObjectName)

    ## Sanity check
    if not wid == widgetObject:
        raise Exception("Should not happen.XD")

    ## Get the meta object from this widget
    meta = widgetObject.metaObject()

    ## Iterate over the number of methods available
    for methodNr in xrange(meta.methodCount()):
        method = meta.method(methodNr)

        ## If the method is a signal type
        if method.methodType() == QtCore.QMetaMethod.MethodType.Signal:
            ## Print the info.
            print
            print "This is the signal name", method.signature()
            print "These are the signal arguments: ", method.parameterNames()

        ## If the method is a slot type
        if method.methodType() == QtCore.QMetaMethod.MethodType.Slot:
            ## Print the info.
            print
            print "This is the slot name", method.signature()
            print "These are the slot arguments: ", method.parameterNames()

    '''
    output example:
    ...
    This is the signal name textChanged(QString)
    These are the signal arguments:  [PySide.QtCore.QByteArray('')]
    This is the signal name textEdited(QString)
    These are the signal arguments:  [PySide.QtCore.QByteArray('')]
    ...
    so now you can do widgetObject.textChanged.connect(test) and every time the text changes the 'test' function will be called
    '''
    widgetObject.textChanged.connect(test)

    sys.exit(app.exec_())
@Botenga delete this code you have at the end of your html:

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="bootstrap.css">
</body>

and it should work now

https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js
https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1 without the jquery.min.js part

var img = document.createElement('img')
img.src = stringified.weather[0].icon

document.getElementById('image-container').innerHTML = "<img src = "+stringified.weather[0].icon+">";

await try{this.getStreamData()}.catch(error){console.log(error)}; didn't work out when I made getStreamData async. Here's my pen:

<html>, <body> sections in them - that is provided by the template. animate.css you can paste it into the resource boxes directly, or they have "quick adds" and a way to search for the package that you want. CodePen is a nice useful site - just remember to stick with "Pen" items for your pages, as a free user (unless you've paid) you only have one "Project". I don't think that there is a limit to the number of "Pen" items? I have seen people get confused by the fact that they can only have one "project"... maybe that will be helpful to be aware of that.

@terensu-desu Sure!

<html>
<head>
<script type="text/javascript" src="https://safi.me.uk/typewriterjs/js/typewriter.js"></script>
<script>
var app = document.getElementById('app');
var typewriter = new Typewriter(app, { loop: true });
typewriter.typeString('Hello World!')
    .pauseFor(2500)
    .deleteAll()
    .typeString('Strings can be removed')
    .pauseFor(2500)
    .deleteChars(7)
    .typeString('altered!')
    .start();
</script>
</head>
<body>
<div id="app"></div>
</body>
</html>

This is my code currently. Nothing shows when I run it. Just a blank page!

Move the <script> element to the end just before the </body> closing tag. That will ensure that the page is loaded before it tries to run the JS.

$(document).wait()

hi can someone tell me how to fix this issue i have setup a fixed navbar , the issue is the banner goes below the navbar how to get the banner to showup after the navbar?

it's not actually an error. but when i try to post the data and get the data back it's actually working good. but whenever i reload the page the data i got from the server and displayed in the browser is removed, why? additional info: robomongo is not supported on my system so i can't see whether the data is stored or not! my system is a 32bit os!
this is the problem:

const express = require('express');
const router = express.Router();
const cricketModel = require('../model/score');

router.get('/api/maxi', function(req, res){
    res.send({"type" : "get"});
});

router.post('/api/maxi/', function(req, res){
    cricketModel.create(req.body).then(function(data){
        res.send(data);
        console.log(data);
    }).catch(err => console.error(err) && res.status(400).send(err));
});

router.delete('/api/maxi/:id', function(req, res){
    res.send({"type" : "delete"});
});

router.put('/api/maxi/:id', function(req, res){
    res.send({"type" : "update"});
});

module.exports = router;

const express = require('express');
const router = require('./api/router.js');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
const app = express();

mongoose.connect("mongodb://localhost/gomaxi");
mongoose.Promise = global.Promise;

app.use(express.static('public'));
app.use(bodyParser.json());
app.use(router);

app.listen(4000, function(){
    console.log("server is listening for the request on port 4000 , hurray !");
});

data back

router.get('/api/maxi', function(req, res){
    console.log('1');
    res.send({"type" : "get"});
});

router.post('/api/maxi/', function(req, res){
    console.log('2')
    cricketModel.create(req.body).then(function(data){
        res.send(data);
        console.log(data);
    }).catch(err => console.error(err) && res.status(400).send(err));
});

router.delete('/api/maxi/:id', function(req, res){
    res.send({"type" : "delete"});
});

router.put('/api/maxi/:id', function(req, res){
    res.send({"type" : "update"});
});

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>maxi</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
</head>
<body>
<input id="search1" placeholder="enter playername">
<input id="search2" placeholder="enter playerscore">
<button class="btn-primary">click</button>
<div class="well"></div>
</body>
<script>
$(document).ready(function(){
    $(".btn-primary").click(function(){
        console.log("click");
        var obj = {
            "player" : $("#search1").val(),
            "score" : $("#search2").val()
        };
        $.ajax({
            type : "POST",
            url : "http://localhost:4000/api/maxi/",
            contentType : "application/json",
            data : JSON.stringify(obj),
            success : function(data){
                console.log(data);
                $(".well").append("<h1>"+data.player + data.score+"</h1>");
            },
            error : function(err){
                console.log('error' ,err);
            },
            dataType : "json"
        });
    });
});
</script>
</html>

```router.post('/', function (req, res, next) {
    var user = new User({
        firstName: req.body.firstName,
        lastName: req.body.lastName,
        password: bcrypt.hashSync(req.body.password, 10),
        email: req.body.email
    });
    user.save(function(err, result) {
        if (err) {
            // If there is an error, return from this function immediately with
            // the error code
            return res.status(500).json({ title: 'An error occurred', error: err });
        }
        res.status(201).json({ message: 'Saved User', obj: result });
    });
});```
const express = require('express');
const router = express.Router();
const cricketModel = require('../model/score');

router.get('/api/maxi', function(req, res){
    res.send({"type" : "get"});
});

router.post('/api/maxi/', function(req, res){
    console.log("2");
    cricketModel(req.body).save().then(function(data){
        res.send(data);
        console.log(data);
    }).catch(err => console.error(err) && res.status(400).send(err));
});

router.delete('/api/maxi/:id', function(req, res){
    res.send({"type" : "delete"});
});

router.put('/api/maxi/:id', function(req, res){
    res.send({"type" : "update"});
});

module.exports = router;

@1532j0004kg how about

```router.post('/api/maxi/', function (req, res, next) {
    console.log('2');
    console.log(body);
    cricketModel.save(function (err, result) {
        if (err) {
            // If there is an error, return from this function immediately with
            // the error code
            return res.status(500).json({ title: 'An error occurred', error: err });
        }
        res.status(201).json({ message: 'Saved User', obj: result });
    });
```

Mongoose: scores.insert({ player: 'q1', score: 1, _id: ObjectId("5a47bd6590f35615fc1c5ffe"), __v: 0 })
{ __v: 0, player: 'q1', score: 1, _id: 5a47bd6590f35615fc1c5ffe }
2
Mongoose: scores.insert({ player: 'q1w2', score: 1, _id: ObjectId("5a47bd6c90f35615fc1c5fff"), __v: 0 })
{ __v: 0, player: 'q1w2', score: 1, _id: 5a47bd6c90f35615fc1c5fff }
2
Mongoose: scores.insert({ player: 'q1w2as', score: 1, _id: ObjectId("5a47bd7390f35615fc1c6000"), __v: 0 })
{ __v: 0, player: 'q1w2as', score: 1, _id: 5a47bd7390f35615fc1c6000 }

```router.post('/api/maxi/', function (req, res, next) {
    console.log('2');
    console.log(body);
    var cricketModel = new CricketModel({
        firstField: req.body.firstField,
        // Your model fields here
        lastField: req.body.lastField,
    });
    cricketModel.save(function (err, result) {
        if (err) {
            // If there is an error, return from this function immediately with
            // the error code
            return res.status(500).json({ title: 'An error occurred', error: err });
        }
        res.status(201).json({ message: 'Saved User', obj: result });
    });
});```

C:\Users\dinesh\Desktop\app1>scores.find();
'scores.find' is not recognized as an internal or external command, operable program or batch file.

C:\Users\dinesh\Desktop\app1>mongo.exe
'mongo.exe' is not recognized as an internal or external command, operable program or batch file.

C:\Users\dinesh\Desktop\app1>start mongo.exe
The system cannot find the file mongo.exe.

C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>
> scores.find();
2017-12-30T08:49:19.995-0800 E QUERY [thread1] ReferenceError: scores is not defined : @(shell):1:1

C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
2017-12-30T08:50:02.775-0800 I CONTROL [main] Hotfix KB2731284 or later update is not installed, will zero-out data files
MongoDB shell version: 3.2.18-4-g752daa3
connecting to: test
Server has startup warnings:
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten] ** WARNING: This 32-bit MongoDB binary is deprecated
2017-12-30T06:55:07.243-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.244-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.245-0800 I CONTROL [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
2017-12-30T06:55:07.270-0800 I CONTROL [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
2017-12-30T06:55:07.271-0800 I CONTROL [initandlisten] ** Note that journaling defaults to off for 32 bit and is currently off.
2017-12-30T06:55:07.272-0800 I CONTROL [initandlisten] ** See http://dochub.mongodb.org/core/32bit
2017-12-30T06:55:07.274-0800 I CONTROL [initandlisten]
>
> use database
switched to db database
> scores.find()
2017-12-30T08:52:26.512-0800 E QUERY [thread1] ReferenceError: scores is not defined : @(shell):1:1
> collections.find()
2017-12-30T08:52:36.159-0800 E QUERY [thread1] ReferenceError: collections is not defined : @(shell):1:1
C:\mongodbs

C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongod --dbpath C:\mongodbs

C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo

C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongod --dbpath C:\mongodbs
2017-12-30T08:59:19.588-0800 I CONTROL [main]
2017-12-30T08:59:19.592-0800 W CONTROL [main] 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
2017-12-30T08:59:19.593-0800 I CONTROL [main]
2017-12-30T08:59:19.602-0800 I CONTROL [main] Hotfix KB2731284 or later update is not installed, will zero-out data files
2017-12-30T08:59:19.611-0800 I CONTROL [initandlisten] MongoDB starting : pid=3544 port=27017 dbpath=C:\mongodbs 32-bit host=dinesh007
2017-12-30T08:59:19.614-0800 I CONTROL [initandlisten] targetMinOS: Windows Vista/Windows Server 2008
2017-12-30T08:59:19.615-0800 I CONTROL [initandlisten] db version v3.2.18-4-g752daa3
2017-12-30T08:59:19.617-0800 I CONTROL [initandlisten] git version: 752daa306095fb1610bb5db13b7b106ac87ec6cb
2017-12-30T08:59:19.618-0800 I CONTROL [initandlisten] allocator: tcmalloc
2017-12-30T08:59:19.619-0800 I CONTROL [initandlisten] modules: none
2017-12-30T08:59:19.622-0800 I CONTROL [initandlisten] build environment:
2017-12-30T08:59:19.623-0800 I CONTROL [initandlisten] distarch: i386
2017-12-30T08:59:19.624-0800 I CONTROL [initandlisten] target_arch: i386
2017-12-30T08:59:19.625-0800 I CONTROL [initandlisten] options: { storage: { dbPath: "C:\mongodbs" } }
2017-12-30T08:59:19.632-0800 E NETWORK [initandlisten] listen(): bind() failed errno:10048 Only one usage of each socket address (protocol/network address/port) is normally permitted. for socket: 0.0.0.0:27017
2017-12-30T08:59:19.633-0800 E STORAGE [initandlisten] Failed to set up sockets during startup.
2017-12-30T08:59:19.635-0800 I CONTROL [initandlisten] dbexit: rc: 48

function palindrome(str) {
    var x = str.split('').reverse().join('');
    var y = x.replace(/[\W_]/g, '');
    var palindr = y.toLowerCase();
    if ( palindr == str){
        return true;
    }
    else {
        return false;
    }
}
palindrome("eye");

return str.replace(/[\W_]/g, '').toLowerCase() === str.replace(/[\W_]/g, '').toLowerCase().split('').reverse().join('');
File: gtkdoc-depscan.in

#!@PYTHON@ import gzip, os.path, re from os import environ, popen, walk from optparse import OptionParser from sys import stderr from xml.sax import ContentHandler, make_parser from xml.sax.handler import feature_external_ges default_books = ['atk', 'gdk', 'gdk-pixbuf', 'glib', 'gio', 'gobject', 'gtk', 'pango'] __comment_regex = re.compile(r'/\*.*?\*/', re.DOTALL) __word_regex = re.compile(r'\b[A-Za-z_][A-Za-z0-9_]*\b') class Book(object): def __init__(self, name, folders, version=None): self.__catalog = None self.__name = name self.__symbols = None self.__timestamp = 0 self.__title = None self.__version = version for f in folders: catalogs = map( lambda n: os.path.join(f, name, n % name), ['%s.devhelp2', '%s.devhelp2.gz']) catalogs = map( lambda n: (os.path.getmtime(n), n), filter(os.path.isfile, catalogs)) catalogs.sort() if catalogs: self.__catalog = catalogs[-1][1] break if not self.__catalog: raise IOError, 'No devhelp book found for "%s"' % name def __cmp__(self, other): if isinstance(other, Book): return cmp(self.name, other.name) return 0 def __repr__(self): return '<Book name="%s">' % self.__name def parse(self): timestamp = os.path.getmtime(self.__catalog) if not self.__symbols or timestamp > self.__timestamp: class DevhelpContentHandler(ContentHandler): def __init__(self, book, symbols): self.__book = book self.__symbols = symbols def startElement(self, name, attrs): if 'book' == name: self.title = attrs.get('title') return if 'keyword' == name: symbol = Symbol.from_xml(self.__book, attrs) if symbol: self.__symbols[symbol.name] = symbol return self.__symbols, self.__timestamp = dict(), timestamp handler = DevhelpContentHandler(self, self.__symbols) parser = make_parser() parser.setFeature(feature_external_ges, False) parser.setContentHandler(handler) if self.__catalog.endswith('.gz'): parser.parse(gzip.open(self.__catalog)) else: parser.parse(open(self.__catalog)) self.__title = handler.title def _get_symbols(self): self.parse(); return self.__symbols def _get_title(self): self.parse(); return self.__title def find_requirements(self): requirements = dict() for symbol in self.symbols.values(): if not symbol.matches: continue if symbol.since and symbol.since > self.version: symbol_list = requirements.get(symbol.since, []) requirements[symbol.since] = symbol_list symbol_list.append(symbol) return requirements catalog = property(lambda self: self.__catalog) name = property(lambda self: self.__name) version = property(lambda self: self.__version) symbols = property(_get_symbols) title = property(_get_title) class Symbol(object): known_attributes = ('name', 'type', 'link', 'deprecated', 'since') class DeprecationInfo(object): def __init__(self, text): if text.count(':'): pair = text.split(':', 1) self.__version = Symbol.VersionInfo(pair[0]) self.__details = pair[1].strip() else: self.__version = None self.__details = text.strip() def __cmp__(self, other): if isinstance(other, Symbol.DeprecationInfo): return cmp(self.version, other.version) if isinstance(other, Symbol.VersionInfo): return cmp(self.version, other) return 1 def __str__(self): if not self.__version: return self.__details and str(self.__details) or 'Deprecated' if self.__details: return 'Since %s: %s' % (self.__version,
self.__details) return 'Since %s' % self.__version details = property(lambda self: self.__details) version = property(lambda self: self.__version) class VersionInfo(object): def __init__(self, text): match = re.match(r'^\w*\s*((?:\d+\.)*\d+)', text) self.__numbers = map(int, match.group(1).split('.')) self.__hash = reduce(lambda x, y: x * 1000 + y, reversed(self.__numbers)) self.__text = text.strip() def __get_number(self, index): if len(self.__numbers) > index: return self.__numbers[index] return 0 def __cmp__(self, other): if isinstance(other, Symbol.VersionInfo): return cmp(self.numbers, other.numbers) return 1 def __hash__(self): return self.__hash def __repr__(self): return '.'.join(map(str, self.__numbers)) major = property(lambda self: self.__get_number(0)) minor = property(lambda self: self.__get_number(1)) patch = property(lambda self: self.__get_number(2)) numbers = property(lambda self: self.__numbers) text = property(lambda self: self.__text) @classmethod def from_xml(cls, book, attrs): name, type, link, deprecated, since = map(attrs.get, Symbol.known_attributes) name = name.strip() if name.endswith('()'): if not type in ('function', 'macro'): type = (name[0].islower() and 'function' or 'macro') name = name[:-2].strip() words = name.split(' ') if len(words) > 1: if words[0] in ('enum', 'struct', 'union'): if not type: type = words[0] name = name[len(words[0]):].strip() elif 'property' == words[-1]: assert('The' == words[0]) owner = link.split('#', 1)[1].split('-', 1)[0] type, name = 'property', '%s::%s' % (owner, name.split('"')[1]) elif 'signal' == words[-1]: assert('The' == words[0]) owner = link.split('#', 1)[1].split('-', 1)[0] type, name = 'signal', '%s:%s' % (owner, name.split('"')[1]) if not type: return None if None != deprecated: deprecated = Symbol.DeprecationInfo(deprecated) if since: since = Symbol.VersionInfo(since) if name.count(' '): print >>stderr, ( 'WARNING: Malformed symbol name: "%s" (type=%s) in %s.' 
% ( name, type, book.name)) return Symbol(book, name, type, link, deprecated, since) def __init__(self, book, name, type, link=None, deprecated=None, since=None): self.__book = book self.__name = name self.__type = type self.__link = link self.__deprecated = deprecated self.__since = since self.__matches = [] def __repr__(self): return ( '<Symbol: %s, type=%s, since=%s, deprecated=%s>' % ( self.name, self.type, self.since, self.deprecated)) book = property(lambda self: self.__book) name = property(lambda self: self.__name) type = property(lambda self: self.__type) link = property(lambda self: self.__link) deprecated = property(lambda self: self.__deprecated) matches = property(lambda self: self.__matches) since = property(lambda self: self.__since) def parse_cmdline(): options = OptionParser(version="@VERSION@") options.add_option('-b', '--book', dest='books', help='name of a devhelp book to consider', default=[], action='append') options.add_option('-d', '--html-dir', metavar='PATH', dest='dirs', help='path of additional folders with devhelp books', default=[], action='append') options.add_option('-u', '--list-unknown', action='store_true', default=False, help='list symbols not found in any book', dest='unknown') options.add_option('-v', '--verbose', action='store_true', default=False, help='print additional information') return options.parse_args() def merge_gnome_path(options): path = environ.get('GNOME2_PATH') path = path and path.split(':') or [] prefix = popen( 'pkg-config --variable=prefix glib-2.0' ).readline().rstrip() path.insert(0, prefix) path = filter(None, [p.strip() for p in path]) path = [[ os.path.join(p, 'share', 'devhelp', 'books'), os.path.join(p, 'share', 'gtk-doc', 'html')] for p in path] path = reduce(list.__add__, path) path = filter(os.path.isdir, path) options.dirs += path if '__main__' == __name__: options, args = parse_cmdline() merge_gnome_path(options) if not options.books: options.books = default_books def trace(message, *args): if options.verbose: print message % args def parse_book(name): try: match = re.match(r'^(.*?)(?::(\d+(?:\.\d+)*))?$', name) name, version = match.groups() trace('reading book: %s', name) version = version and Symbol.VersionInfo(version) return name, Book(name, options.dirs, version) except IOError, e: print >>stderr, 'WARNING: %s.' 
% e def scan_source_file(name): contents = None try: contents = __comment_regex.sub('', file(name).read()) except IOError, e: print >>stderr, e if contents: trace('scanning: %s', name) lines = contents.split('\n') for lineno in range(len(lines)): for word in __word_regex.findall(lines[lineno]): symbol = symbols.get(word) if symbol: symbol.matches.append((name, lineno, symbol)) elif options.unknown and word.find('_') > 0: unknown_symbols.append((name, lineno, word)) unknown_symbols = [] matches, symbols = dict(), dict() books = dict(filter(None, map(parse_book, set(options.books)))) for book in books.values(): symbols.update(book.symbols) for name in args: if os.path.isdir(name): for path, dirs, files in walk(name): for f in files: if f.endswith('.c'): scan_source_file(os.path.join(path, f)) else: scan_source_file(name) matches = [] for book in books.values(): requirements = book.find_requirements().items() requirements.sort() if requirements: for symbol in requirements[-1][1]: matches += symbol.matches if options.unknown: matches += unknown_symbols matches.sort() for filename, lineno, symbol in matches: if isinstance(symbol, Symbol): args = filename, lineno, symbol.book.name, symbol.since, symbol.name print '%s:%d: %s-%s required for %s' % args elif options.verbose: print '%s:%d: unknown symbol %s' % (filename, lineno, symbol) if options.unknown: unknown = [m[2].split('_')[0].lower() for m in unknown_symbols] unknown = list(set(unknown)) unknown.sort() print 'unknown prefixes: %s' % ', '.join(unknown) raise SystemExit(matches and 1 or 0)
If you’ve spent any time looking at online NLP resources, you’ve probably run into spelling correctors. Writing a simple but reasonably accurate and powerful spelling corrector can be done with very few lines of code. I found this sample program by Peter Norvig (first written in 2006) that does it in about 30 lines. As an exercise, I decided to port it over to Estonian. If you want to do something similar, here’s what you’ll need to do.

First: You need some text! Norvig’s program begins by processing a text file—specifically, it extracts tokens based on a very simple regular expression.

import re
from collections import Counter

def words(text): return re.findall(r'\w+', text.lower())

WORDS = Counter(words(open('big.txt').read()))

The program builds its dictionary of known “words” by parsing a text file—big.txt—and counting all the “words” it finds in the text file, where “word” for the program means any continuous string of one or more letters, digits, and the underscore _ (r'\w+'). The idea is that the program can provide spelling corrections if it is exposed to a large number of correct spellings of a variety of words. Norvig ran his original program on just over 1 million words, which resulted in a dictionary of about 30,000 unique words.

To build your own text file, the easiest route is to use existing corpora, if available. For Estonian, there are many freely available corpora. In fact, Sven Laur and colleagues built clear workflows for downloading and processing these corpora in Python (estnltk). I decided to use the Estonian Reference Corpus. I excluded the chatrooms part of the corpus (because it was full of spelling errors), but I still ended up with just north of 3.5 million unique words in a corpus of over 200 million total words.

Measuring string similarity through edit distance

Norvig takes care to explain how the program works both mechanically (i.e., the code) and theoretically (i.e., probability theory). I want to highlight one piece of that: edit distance. Edit distance is a means to measure similarity between two strings based on how many changes (e.g., deletions, additions, transpositions, …) must be made to string1 in order to yield string2. The spelling corrector utilizes edit distance to find suitable corrections in the following way. Given a test string, …

If the string matches a word the program knows, then the string is a correctly spelled word.
If there are no exact matches, generate all strings that are one change away from the test string. If any of them are words the program knows, choose the one with the greatest frequency in the overall corpus.
If there are no exact matches or matches at an edit distance of 1, check all strings that are two changes away from the test string. If any of them are words the program knows, choose the one with the greatest frequency in the overall corpus.
If there are still no matches, return the test string—there is nothing similar in the corpus, so the program can’t figure it out.

The point in the program that generates all the strings that are one change away is given below. This is the next place where you’ll need to edit the code to adapt it for another language!

def edits1(word):
    # "All edits that are one edit away from `word`."
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

Without getting into the technical details of the implementation, the code takes an input string and returns a set containing all strings that differ from the input in only one way: with a deletion, transposition, replacement, or insertion. So, if our input was ‘paer’, edits1 would return a set including (among other things) par, paper, pare, and pier.

The code I’ve represented above will need to be edited to be used with many non-English languages. Can you see why? The program relies on a list of letters in order to create replaces and inserts. Of course, Estonian does not have the same alphabet as English! So for Estonian, you have to change the line that sets the value for letters to match the Estonian alphabet (adding ä, ö, õ, ü, š, ž; subtracting c, q, w, x, y):

letters = 'aäbdefghijklmnoöõprsštuüvzž'

Once you make that change, it should be up and running! Before wrapping up this post, I want to discuss one key difference between English and Estonian that can lead to some different results.

A difference between English and Estonian: morphology!

In Norvig’s original implementation for English, a corpus of 1,115,504 words yielded 32,192 unique words. I chopped my corpus down to the same length, and I found a much larger number of unique words: 170,420! What’s going on here? Does Estonian just have a much richer vocabulary than English? I’d say that’s unlikely; rather, this has to do with what the program treats as a word. As far as the program is concerned, be, am, is, are, were, was, being, been are all different words, because they’re different sequences of characters. When the program counts unique words, it will count each form of be as a unique word. There is a long-standing joke in linguistics that we can’t define what a word is, but many speakers have the intuition that is and am are not “different words”: they’re different forms of the same word. The problem is compounded in Estonian, which has very rich morphology. The verb be in English has 8 different forms, which is high for English. Most verbs in English have just 4 or 5. In Estonian, most verbs have over 30 forms. In fact, it’s similar for nouns, which all have 12-14 “unique” forms (times two if they can be pluralized). Because this simple spelling corrector defines word as roughly “a unique string of letters with spaces on either side”, it will treat all forms of olema ‘be’ as different words.

Why might this matter? Well, this program uses probability to recommend the most likely correction for any misspelled words: choose the word (i) with the fewest changes that (ii) is most common in the corpus. Because of how the program defines “word”, the resulting probabilities are not about words on a higher level, they’re about strings, e.g., How frequent is the string ‘is’ in the corpus? As a result, it’s possible that a misspelling of a common word could get beaten by a less common word (if, for example, it’s a particularly rare form of the common word).
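For reference, the part of Norvig's program that turns those string frequencies into a recommendation looks like this (reproduced from memory, so treat it as a sketch; edits2 simply applies edits1 twice):

def P(word, N=sum(WORDS.values())):
    "Probability of `word`, estimated as its raw string frequency."
    return WORDS[word] / N

def known(words):
    "The subset of `words` that appear in the WORDS dictionary."
    return set(w for w in words if w in WORDS)

def edits2(word):
    "All edits that are two edits away from `word`."
    return (e2 for e1 in edits1(word) for e2 in edits1(e1))

def candidates(word):
    "Possible spelling corrections for `word`, nearest edits first."
    return known([word]) or known(edits1(word)) or known(edits2(word)) or [word]

def correction(word):
    "Most probable spelling correction for `word`."
    return max(candidates(word), key=P)

Since P is computed over raw strings, a rare inflected form of a frequent lemma can lose out to any more frequent string at the same edit distance, which is exactly the issue described above.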
This problem could be avoided by calculating probabilities on a version of the corpus that has been stemmed, but in truth, the real answer is probably to just build a more sophisticated spelling corrector! Spelling correction: mostly an English problem anyway Ultimately, designing spelling correction systems based on English might lead them to have an English bias, i.e., to not necessarily work as effectively on other languages. But that’s probably fine, because spelling is primarily an English problem anyway. When something is this easy to put together, you may want to do it just for fun, and you’ll get to practice some things—in this case, building a data set—along the way.
This option apparently isn't available on iPhone? Nor quoting another member? Please excuse the inconvenience.

-bash-3.2# diskutil list
/dev/disk0 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *500.1 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_CoreStorage Macintosh HD 499.2 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme +2.1 GB disk1
1: Apple_HFS OS X Base System 2.0 GB disk1s1
/dev/disk2 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +5.2 MB disk2
/dev/disk3 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk3
/dev/disk4 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk4
/dev/disk5 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk5
/dev/disk6 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk6
/dev/disk7 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk7
/dev/disk8 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk8
/dev/disk9 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +12.6 MB disk9
/dev/disk10 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk10
/dev/disk11 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +1.0 MB disk11
/dev/disk12 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk12
/dev/disk13 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk13
/dev/disk14 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk14
/dev/disk15 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +1.0 MB disk15
/dev/disk16 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +6.3 MB disk16
/dev/disk17 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +6.3 MB disk17
/dev/disk18 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk18
/dev/disk19 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk19

Offline Logical Volume OS X Base System on disk0s2 5F1D5C19-77F4-464A-A430-25E9CB933D17 Locked Encrypted
-bash-3.2#

Logical Volume OS X Base System on disk0s2
1: Apple_HFS OS X Base System 2.0 GB disk1s1
diskutil list

-bash-3.2# diskutil list
/dev/disk0 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *500.1 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_CoreStorage Macintosh HD 499.2 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme +2.1 GB disk1
1: Apple_HFS OS X Base System 2.0 GB disk1s1
/dev/disk2 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +5.2 MB disk2
/dev/disk3 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk3
/dev/disk4 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk4
/dev/disk5 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk5
/dev/disk6 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk6
/dev/disk7 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk7
/dev/disk8 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk8
/dev/disk9 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +12.6 MB disk9
/dev/disk10 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk10
/dev/disk11 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +1.0 MB disk11
/dev/disk12 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk12
/dev/disk13 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk13 /dev/disk14 (disk image): #: TYPE NAME SIZE IDENTIFIER 0: untitled +524.3 KB disk14 /dev/disk15 (disk image): #: TYPE NAME SIZE IDENTIFIER 0: untitled +1.0 MB disk15 /dev/disk16 (disk image): #: TYPE NAME SIZE IDENTIFIER 0: untitled +6.3 MB disk16 /dev/disk17 (disk image): #: TYPE NAME SIZE IDENTIFIER 0: untitled +6.3 MB disk17 /dev/disk18 (disk image): #: TYPE NAME SIZE IDENTIFIER 0: untitled +524.3 KB disk18 /dev/disk19 (disk image): #: TYPE NAME SIZE IDENTIFIER 0: untitled +2.1 MB disk19 /dev/disk20 (internal, virtual): #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS OS X Base System +498.9 GB disk20 Logical Volume on disk0s2 5F1D5C19-77F4-464A-A430-25E9CB933D17 Unlocked Encrypted diskutil eraseVolume jhfs+ "Macintosh HD" disk20 diskutil cs list -bash-3.2# diskutil eraseVolume jhfs+ "Macintosh HD" disk20 Started erase on disk20 OS X Base System Unmounting disk Erasing Initialized /dev/rdisk20 as a 465 GB case-insensitive HFS Plus volume with a 40960k journal Mounting disk Finished erase on disk20 Macintosh HD -bash-3.2# diskutil cs list CoreStorage logical volume groups (1 found) | +-- Logical Volume Group 1F7A63DF-46A9-4120-9279-0691B1DDEC48 ========================================================= Name: Macintosh HD Status: Online Size: 499248103424 B (499.2 GB) Free Space: 8486912 B (8.5 MB) | +-< Physical Volume F1CB743E-C0E1-43BD-A398-5C7BF1C2C4B0 | ---------------------------------------------------- | Index: 0 | Disk: disk0s2 | Status: Online | Size: 499248103424 B (499.2 GB) | +-> Logical Volume Family 962F0044-E90C-4115-B3AC-DCA649BF3837 ---------------------------------------------------------- Encryption Type: AES-XTS Encryption Status: Unlocked Conversion Status: Complete High Level Queries: Fully Secure | Passphrase Required | Accepts New Users | Has Visible Users | Has Volume Key | +-> Logical Volume 5F1D5C19-77F4-464A-A430-25E9CB933D17 --------------------------------------------------- Disk: disk20 Status: Online Size (Total): 498887294976 B (498.9 GB) Revertible: Yes (unlock and decryption required) LV Name: Macintosh HD Volume Name: Macintosh HD Content Hint: Apple_HFS -bash-3.2# Oui c'est ça, c'est comme si j'avais un ordi neuf. J'ai du créer un nouveau compte.BonsoirCharou Quand tu dis --> est-ce que tu veux dire qu'un technicien a ré-installé un OS et que tu peux de nouveau ouvrir une session ? - session d'un compte vide des anciens documents ? Mais un redémarrage de plus à résolu le problème !
I have two irregularly shaped polygons that overlap each other in at least two points. I want to first find the intersection area between all of them; then, for each polygon pair that intersects, I want to find the centerline of the intersection, which should not be a straight line. I thought of using the Centerline module in Python, which uses a Voronoi diagram, but it doesn't seem to work well. Is there another library that can quickly calculate the centerline of an intersection that starts at one intersection point and ends at another? I have the following code:

import geopandas as gdp
from shapely.geometry import Polygon
from centerline.geometry import Centerline

crs = {'init': 'epsg:4326'}
attributes = {"id": 1, "name": "polygon", "valid": True}

# find the intersection of polygons
intersection_list = []
for x in range(0, len(list_of_polygons)):
    if len(list_of_polygons) > 1:
        # check if the last polygon in the list intersects with the current polygon
        if list_of_polygons[-1].intersects(list_of_polygons[x]):
            polygon_last = gdp.GeoDataFrame(index=[0], crs=crs, geometry=[list_of_polygons[-1]])
            polygon_current = gdp.GeoDataFrame(index=[0], crs=crs, geometry=[list_of_polygons[x]])
            # get the intersection geometry
            intersection = gdp.overlay(polygon_last, polygon_current, how="intersection")
            intersection_geom = Polygon(intersection.geometry)
            # find the centerline of the intersection
            centreline = Centerline(intersection_geom, **attributes)
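For what it's worth, here is a minimal sketch of the pairwise pipeline described above, using shapely directly instead of geopandas overlays; `list_of_polygons`, the helper name, and the interpolation distance are illustrative assumptions rather than a definitive fix.

from itertools import combinations

from shapely.geometry import Polygon
from centerline.geometry import Centerline

def pairwise_centerlines(list_of_polygons, interpolation_distance=0.1):
    """Return a Centerline for every intersecting pair of polygons."""
    results = []
    for poly_a, poly_b in combinations(list_of_polygons, 2):
        if not poly_a.intersects(poly_b):
            continue
        overlap = poly_a.intersection(poly_b)
        # Only plain polygons make sense as Centerline input; skip points/lines.
        if isinstance(overlap, Polygon) and not overlap.is_empty:
            # A smaller interpolation distance densifies the polygon boundary,
            # which tends to smooth the Voronoi-based centerline.
            results.append(Centerline(overlap, interpolation_distance))
    return results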
Problem intro
As data scientists (or analysts), we spend a significant chunk of time gathering and cleaning data. Sometimes, as we do feature engineering, we build functions and iterate on them based on the objective. After a while, you deploy your models along with the feature engineering functions into production (or a data analysis for a dashboard output), and your stakeholder / product manager spots a mistake: How do you identify the failure point as soon as possible? Is it the code, or the data being sent to you in production? This is where testing becomes important from a data scientist's point of view! In addition, it also helps to:
provide context + documentation
safeguard against yourself when making changes or pre-deployment!
Pre-Req
Good to have:
Docker
Knowledge of remote development with vscode
Makefile
Quick Setup
In a rush? All the (completed) examples are available in github. Git clone the repo with:
git clone https://github.com/Freedom89/pytest-tutorial.git
Refer to the README for setup. There are 3 options:
Local setup with terminal.
Accessing the Docker bash.
Inside vscode remote development.
For the purpose of this repo, it is recommended to use either the vscode remote development terminal or your normal terminal accessing the Docker bash entry point. The docker guide might be useful in understanding the README.
Actually..
Most data scientists are already doing testing when cleaning data / building features! Let's consider one of a data scientist's most popular tools, pandas. You would attempt some aggregations, run some sample data, and check the values:

import pandas as pd

df_dummy = pd.DataFrame(dict(id=[1, 1, 2, 2, 3, 3, 3], values=[3, 5, 6, 7, 8, 9, 15]))
df_stats = (
    df_dummy.groupby(["id"])
    .agg(
        count=pd.NamedAgg(column="values", aggfunc="count"),
        sum=pd.NamedAgg(column="values", aggfunc="sum"),
        max=pd.NamedAgg(column="values", aggfunc="max"),
    )
    .reset_index()
    .assign(pct_value=lambda df: round(100 * df["sum"] / sum(df["sum"]), 2))
)
"""
df_stats
   id  count  sum  max  pct_value
0   1      2    8    5      15.09
1   2      2   13    7      24.53
2   3      3   32   15      60.38
"""
# To double check - you might sample a column or specific rows
df_temp = df_dummy.loc[lambda x: x["id"] == 1][["values"]]
df_temp.sum().values  # 8
df_temp.max().values  # 5

Now let's visit a simpler example!
Introduction
Suppose you have implemented a function, say computing the number of combinations:
^nC_r = \frac{n!}{r! \times (n-r)!}
You would break the function down into a few units (in reality you might do it in one pass, but let's go along with it):
implementing the factorial function
implementing the multiplication function
implementing the division function
Assert
Before we proceed, we need to learn the assert statement: assert <statement>, <reason if failure>.

x: int = 100
y: int = 200
assert x == y, "values are not the same!"

Output:

assert x == y, "values are not the same!"
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
/workspaces/pytest-tutorial/src/simple_math.py in
----> 8 assert x == y, "values are not the same!"
AssertionError: values are not the same!

If the assert statement is correct, e.g. assert x == y - 100, then no error message will occur.
Example
Now, let's start with a Hello World example!
Assuming you are using the Anaconda distribution on mac/linux/docker etc., define a Python script such as simple_math.py, with pytest installed (via pip), as follows:

def factorial(x: int) -> int:
    if x == 0:
        return 1
    else:
        return x * factorial(x - 1)


def test_factorial():
    assert factorial(3) == 6, "response is incorrect"

In the same directory where simple_math.py is:

pytest simple_math.py
======================= test session starts ========================
platform linux -- Python 3.7.6, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /workspaces/pytest-tutorial/src
plugins: mock-3.3.0
collected 1 item

simple_math.py .                                             [100%]

======================== 1 passed in 0.03s =========================

If you encounter ModuleNotFoundError: No module named 'src' and you are running in your local environment, you can:
Understand why and refer to the references section
Try running python -m pytest
Or make localtest if you understand Makefile.
Naming convention
Quoted from the docs: Test method names or function names should start with “test_”, as in test_example. Methods with names that don’t match this pattern won’t be executed as tests. You will notice that some scripts beginning with eg_ will not run unless specifically invoked!
Folder structure
There are also certain recommended ways to structure your test layout. I personally follow this layout, which is the first structure suggested in the docs above:

.
├── Dockerfile
├── README.md
├── requirements.txt
├── setup.py
├── src
│   ├── __init__.py
│   └── simple_math.py
└── tests
    └── test_simple_math.py

Best practices / extra readings in references.
Use-Case
We now look at common use cases that a data scientist/analyst will encounter:
Regex
Perhaps, as a data scientist working at an e-commerce platform and launching a marketing campaign, you want to detect emails that are associated with each other. One way this could be done is by string similarity. Your users would start creating emails such as:
string12352@gmail.com
string23522@gmail.com
Or by using multiple free email providers:
string@gmail.com
string@outlook.com
string@yahoo.com
In src/regex.py: In tests/regex/test_regex.py: To run:
pytest tests/regex
Output:

====================== test session starts ======================
platform linux -- Python 3.7.6, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /workspaces/pytest-tutorial
plugins: cov-2.10.1, mock-3.3.0
collected 3 items

tests/regex/test_regex.py ...                             [100%]

======================= 3 passed in 0.06s =======================

There are some problems with this testing in terms of best coding practices, such as:
Multiple namings trying to figure out different function parameters
test_rm1, test_rm2 - the naming convention cannot be the usual test_function format
Multiple asserts doing the same thing within test_rm2, but only 1 pass will be shown
Multiple copy/pasting, yikes!
There is a better way to do this with parametrize, which will be re-visited later.
Decision Tree
Data scientists/analysts sometimes implement rule-based engines or perform feature engineering! When creating a function or the rule engine, a data scientist would enter some mock values to test that the function is working as expected!
Aside: Another purpose of this is to demonstrate with pydantic!
graph TD;
  A --true--> B1
  A --false--> B2
  B2 --false_return--> C21
  B2 --true_return--> C22
  B1 --false--> C11
  B1 --true_return--> C12
  C11 --false_return--> D11
  C11 --true_return--> D12
  A[x>5]
  B1[x * y>10]
  B2[category in A, B]
  C11[y/z < 100]
  C21((value0))
  C12((value4))
  C22((value1))
  D11((value2))
  D12((value3))

In src/dtree/dtree.py: In src/dtree/types.py: In tests/dtree/test_dtree.py: Similarly, to test for the other values / sample inputs, you can make use of pytest parametrize. To run:
pytest tests/dtree/
Pandas
In the earlier pandas example, this is what you could have done:

df_check = pd.DataFrame(
    {
        "id": {0: 1, 1: 2, 2: 3},
        "count": {0: 2, 1: 2, 2: 3},
        "sum": {0: 8, 1: 13, 2: 32},
        "max": {0: 5, 1: 7, 2: 15},
        "pct_value": {0: 15.09, 1: 24.53, 2: 60.38},
    }
)

pd.testing.assert_frame_equal(df_stats, df_check)

Now, a question you may start asking: what if I have a mock dataframe I wish to reuse for multiple tests? This is where fixtures will be useful!
Pytest Libraries
This section covers the other features of pytest that solve some of the pain points above. They are mainly:
Fixtures
In a data scientist's context, a fixture is essentially an object you can access in the test.

import pytest

@pytest.fixture
def put_whatever_name_you_wish():
    return "anyvalue"

def test_value(put_whatever_name_you_wish):
    assert put_whatever_name_you_wish == "anyvalue", "something went wrong"

Now, let's take a look at the pandas example. In src/pd_df.py: In tests/pd_df.py: To know more, you can find the docs here.
Parametrize
In the earlier regex example, you might have a few emails to test. Similarly, in the decision trees you would need to provide sample values to verify that each branch is working as expected. Think of parametrize as different values you can input to get different desired outputs. The syntax may seem weird at first:

import pytest

@pytest.mark.parametrize(
    "input,another_input,output",
    [((1, 1), 2, 4), ((2, 4), 4, 10), ((4, 10), 100, 114)],
)
def test_addition(input, another_input, output):
    assert sum(input) + another_input == output, "something went wrong"

Essentially, you envision what variables you need in the functions, e.g. A, B, C, and you concat them in a string "A,B,C" separated by commas. After that, you define a list of tuples, with each element in the tuple representing the value of each variable. In the earlier regex example in tests/regex/test_regex.py, it would be simplified to the following:

@pytest.mark.parametrize(
    "input_email,lb,ub,output_email",
    [
        ("a1234@gmail.com", 1, None, "a@gmail.com"),
        ("a1234@gmail.com", 5, None, "a1234@gmail.com"),
        ("a1234@gmail.com", 1, 3, "a1@gmail.com"),
    ],
)
def test_rm_trailing_numbers(input_email, lb, ub, output_email):
    assert (
        regex.rm_trailing_numbers(input_email, lb, ub) == output_email
    ), "something went wrong"

Note: using fixtures together with parametrize would require pytest-cases, which is not covered here. But just so you are aware!
Mocking
Mocking is generally used in two cases (in my experience). The first case is when the value or feature is time dependent or random in nature, assuming that setting a CONSTANT value, a seed, or a fixture is not possible. The second case is when a function or process takes too long to return, such as a complicated function or a call to an external system, and you would like to bypass it so that your tests are independent of the external system (you could use docker-compose, but that is a separate discussion altogether). More suggested readings are available in the references, do check them out!
The full docs for pytest-mock can be found here. The two most common mocks I use are:
mocker.patch
mocker.patch.object
To demonstrate, in src/mock.py: In tests/eg_mock.py:
Pytest commands
The full list of pytest commands for the terminal can be found with pytest -h or pytest --help. These will get you started:

command | example | description
pytest | as is | run all tests; by default looks for the tests directory
pytest <dir> | pytest tests | execute all tests in a directory
pytest <dir>/<script> | pytest tests/eg_mock.py | execute a specific script
pytest <dir>/<script>::<func> | pytest tests/regex/test_regex.py::test_rm2 | execute a specific function within a script
pytest --collect-only | as is | show all tests that would be executed
pytest -k <string> | pytest -k "rm_trailing" | execute tests with a matching string
pytest -k "<string> and not <string>" | pytest -k "rm and not numbers" | execute tests matching a string while excluding another
pytest -x | as is | stop after the first failure
pytest -v | as is | verbose output

Skipping tests
There are cases where you would want to:
Deliberately fail a test, to show how the function should not be used or that it is expected to fail for certain inputs.
Skip a test. Generally I use this when I have no idea how to test something but I tried my best; in that case I leave it as-is to show what I have attempted. (Hopefully someone, or future me, will figure it out!)
Skip a test under certain conditions, such as on specific operating systems.
Refer to the docs for more on the various types of skipping. The three examples below illustrate the above!
Xfail

@pytest.mark.xfail(strict=True)
def test_function():
    assert 1 == 2, "something went wrong"

skip

@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...

skipif

import sys

@pytest.mark.skipif(sys.version_info < (3, 7), reason="requires python3.7 or higher")
def test_function():
    ...

Pytest Cov
Helps to check coverage so you know your testing percentage! Docs here.

pytest --cov
----------- coverage: platform linux, python 3.7.6-final-0 -----------
Name                        Stmts   Miss  Cover
-----------------------------------------------
src/__init__.py                 0      0   100%
src/dtree/__init__.py           0      0   100%
src/dtree/dtree.py             17      6    65%
src/dtree/types.py             22      0   100%
src/pd_df.py                    6      0   100%
src/regex/__init__.py           0      0   100%
src/regex/regex.py             12      0   100%
src/simple_math.py              4      0   100%
tests/dtree/test_dtree.py       9      0   100%
tests/regex/test_regex.py      11      0   100%
tests/test_pd_df.py            13      0   100%
tests/test_simple_math.py       5      1    80%
-----------------------------------------------
TOTAL                          99      7    93%

References
Understanding pytest path
Good practices
More examples on Mocking
Additional guides
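To round out the mocking section above, here is a minimal, hypothetical sketch of the pattern with pytest-mock's mocker fixture. The repo's actual src/mock.py and tests/eg_mock.py are not reproduced in this post, so the roll_dice function and its test below are illustrative assumptions only.

# src/mock.py (hypothetical)
import random

def roll_dice() -> int:
    # Random by nature, so a test needs mocking to be deterministic
    return random.randint(1, 6)


# tests/eg_mock.py (hypothetical)
def test_roll_dice_is_deterministic(mocker):
    from src import mock

    # Patch the call inside src.mock so the test no longer depends on randomness
    mocker.patch("src.mock.random.randint", return_value=4)
    assert mock.roll_dice() == 4

mocker.patch.object works the same way, but takes the object itself plus an attribute name, which avoids spelling the import path as a string.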
Problem
Today I used a horizontal Row layout with an icon and a line of text inside it, but the content kept overflowing the layout. Even with overflow set, the following situation showed up while debugging.

Row(
  crossAxisAlignment: CrossAxisAlignment.center,
  children: <Widget>[
    Image.asset(
      "images/list_icon.png",
      width: 10,
      height: 10,
      fit: BoxFit.fill,
    ),
    Text(
      " " + hotService[index]['servicename'],
      maxLines: 1,
      style: TextStyle(fontSize: 14, decoration: TextDecoration.none),
      overflow: TextOverflow.ellipsis,
    )
  ],
),

Solution
Wrap the Text widget in an Expanded. (When I was first learning, I replaced the Row with a Flex; since Row inherits from Flex, there is no need to replace the outer widget. Thanks to Zhou Nancheng for pointing this out.)

Row(
  crossAxisAlignment: CrossAxisAlignment.center,
  children: <Widget>[
    Image.asset(
      "images/list_icon.png",
      width: 10,
      height: 10,
      fit: BoxFit.fill,
    ),
    Expanded(
      child: Text(
        " " + hotService[index]['servicename'],
        maxLines: 1,
        style: TextStyle(fontSize: 14, decoration: TextDecoration.none),
        overflow: TextOverflow.ellipsis,
      ),
    )
  ],
),

After the fix, it looks as shown in the screenshot.
Why the problem happens
Since I was just starting to learn, I didn't understand this well. The horizontal Row has no fixed width, and the Text stretches the Row based on its content, so it overflows. With Expanded, the Text can take at most all the remaining width, so when it reaches that maximum width the ellipsis is shown.
This mistake is similar to using a ListView inside a Column: you get an error about a ListView placed in a view of unbounded height. The offending code looks like this:

Column(
  children: <Widget>[
    ListView.builder(
      itemBuilder: (BuildContext con, int index) {
        return Text("index");
      },
      itemCount: 10,
      // uncomment the line below to fix the error
      //shrinkWrap: true,
    )
  ],
),

// The error that appears:
I/flutter (18787): ══╡ EXCEPTION CAUGHT BY RENDERING LIBRARY ╞═════════════════════════════════════════════════════════
I/flutter (18787): The following assertion was thrown during performResize():
I/flutter (18787): Vertical viewport was given unbounded height.
I/flutter (18787): Viewports expand in the scrolling direction to fill their container. In this case, a vertical
I/flutter (18787): viewport was given an unlimited amount of vertical space in which to expand. This situation
I/flutter (18787): typically happens when a scrollable widget is nested inside another scrollable widget.
I/flutter (18787): If this widget is always nested in a scrollable widget there is no need to use a viewport because
I/flutter (18787): there will always be enough vertical space for the children. In this case, consider using a Column
I/flutter (18787): instead. Otherwise, consider using the "shrinkWrap" property (or a ShrinkWrappingViewport) to size
I/flutter (18787): the height of the viewport to the sum of the heights of its children.

The fix for nesting a ListView inside a Column is to set shrinkWrap. shrinkWrap controls whether the ListView sizes itself to the total length of its children; the default is false. By default, a ListView occupies as much space as possible in the scroll direction, so when the ListView sits inside a container that is unbounded in the scroll direction, shrinkWrap must be true.
immunarch is an R package designed to analyse T-cell receptor (TCR) and B-cell receptor (BCR) repertoires, aimed at medical scientists and bioinformaticians. The mission of immunarch is to make immune sequencing data analysis as effortless as possible and help you focus on research instead of coding. Follow us on Twitter for news and updates.
In order to install immunarch, execute the following command:
install.packages("immunarch")
That’s it, you can start using immunarch now! See the Quick Start section below to dive into immune repertoire data analysis. If you run into any trouble with installation, take a look at the Installation Troubleshooting section.
Note: there are quite a lot of dependencies to install with the package because it installs all the widely-used packages for data analysis and visualisation. You get both the AIRR data analysis framework and the full Data Science package ecosystem with only one command, making immunarch the entry point for single-cell & immune repertoire Data Science.
If the above command doesn’t work for any reason, try installing immunarch directly from its repository:
install.packages("devtools") # skip this if you already installed devtools
devtools::install_github("immunomind/immunarch")
Since releasing on CRAN is limited to one release every one to two months, you can install the latest pre-release version, with bleeding-edge features and optimisations, directly from the code repository. In order to install the latest pre-release version, you need to execute only two commands:
install.packages("devtools") # skip this if you already installed devtools
devtools::install_github("immunomind/immunarch", ref="dev")
You can find the list of releases of immunarch here: http://github.com/immunomind/immunarch/releases
The gist of the typical TCR or BCR data analysis workflow can be reduced to the next few lines of code.
1) Load the package and the data
library(immunarch) # Load the package into R
data(immdata) # Load the test dataset
2) Calculate and visualise basic statistics
repExplore(immdata$data, "lens") %>% vis() # Visualise the length distribution of CDR3
repClonality(immdata$data, "homeo") %>% vis() # Visualise the relative abundance of clonotypes
3) Explore and compare T-cell and B-cell repertoires
repOverlap(immdata$data) %>% vis() # Build the heatmap of public clonotypes shared between repertoires
geneUsage(immdata$data[[1]]) %>% vis() # Visualise the V-gene distribution for the first repertoire
repDiversity(immdata$data) %>% vis(.by = "Status", .meta = immdata$meta) # Visualise the Chao1 diversity of repertoires, grouped by the patient status
To load your own data:
library(immunarch) # Load the package into R
immdata <- repLoad("path/to/your/data") # Replace it with the path to your data. Immunarch automatically detects the file format.
For advanced methods such as clonotype annotation, clonotype tracking, kmer analysis and public repertoire analysis see “Tutorials”.
If you can not install devtools, check sections 1 and 2 below. If you run into any other trouble, try the following steps:
Check your R version. Run the version command in the console to get your R version. If the R version is below 3.5.0 (for example, R version 3.1.0), try updating your R version to the latest one. Check this link if you are on Ubuntu.
Note: if you try to install a package after the update and it still fails with the following message:
ERROR: dependencies ‘httr’, ‘usethis’ are not available for package ‘devtools’
* removing ‘/home/ga/R/x86_64-pc-linux-gnu-library/3.5/devtools’
Warning in install.packages : installation of package ‘devtools’ had non-zero exit status
it means that you need to re-install packages that were built under the previous R version. In the above example those would be the packages httr and usethis. In order to re-install a package you need to execute the command install.packages("package_name"), where package_name is the name of the package to update. To find packages that need to be re-installed after updating R, look for installation messages like this in the installation process:
ERROR: package ‘usethis’ was installed by an R version with different internals; it needs to be reinstalled for use with this R version
Check if your packages are outdated and update them. In RStudio you can use the “Update” button on top of the package list in the “Packages” window. In the R console you can run the old.packages() command to view a list of outdated packages. The following messages indicate that an update is required:
Error: package ‘dtplyr’ 0.0.3 was found, but >= 1.0.0 is required by ‘immunarch’
Execution halted
ERROR: lazy loading failed for package ‘immunarch’
byte-compile and prepare package for lazy loading
Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : namespace 'ggalluvial' 0.9.1 is being loaded, but >= 0.10.0 is required
Calls: <Anonymous> ... namespaceImportFrom -> asNamespace -> loadNamespace
Execution halted
For Mac users. Make sure to install Xcode from the App Store first, and the command line developer tools second, by executing the following command in Terminal:
xcode-select --install
For Mac users. If you have issues such as old packages that can’t be updated, or error messages such as ld: warning: directory not found for option or ld: library not found for -lgfortran, this link will help you fix the issue.
For macOS Mojave (10.14) users. If you run into the following error:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/math.h:301:15: fatal error: 'math.h' file not found
#include_next <math.h>
              ^~~~~~~~
Open Terminal, execute the following command and then try again to install immunarch:
sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target /
For Linux users. If you have issues with the igraph library or have Fortran errors such as:
** testing if installed package can be loaded from temporary location
Error: package or namespace load failed for 'igraph' in dyn.load(file, DLLpath = DLLpath, ...): unable to load shared object '/usr/local/lib/R/site-library/00LOCK-igraph/00new/igraph/libs/igraph.so': libgfortran.so.4: cannot open shared object file: No such file or directory
See this link for help.
For Linux users. If you have issues with the rgl package:
configure: error: missing required header GL/gl.h
ERROR: configuration failed for package ‘rgl’
Install “mesa-common-dev” via the OS terminal by executing the following command:
apt-get install mesa-common-dev
Check this link for more information and other possible workarounds.
If you have error messages with rlang in them, such as:
Error: .onLoad failed in loadNamespace() for 'vctrs', details: call: env_bind_impl(.env, list3(...), "env_bind()", bind = TRUE) error: object 'rlang_env_bind_list' not found
Remove the rlang package and install it again. This error often happens after updating R to a newer version while rlang was not properly updated.
If you have error messages like the following (note the (converted from warning) part):
** byte-compile and prepare package for lazy loading
Error: (converted from warning) package 'ggplot2' was built under R version 3.6.1
Execution halted
ERROR: lazy loading failed for package 'immunarch'
Execute the following command in R and try again to install the package:
Sys.setenv(R_REMOTES_NO_ERRORS_FROM_WARNINGS="true")
For Windows users. If you have issues with the package installation, or if you want to change the folder for R packages, feel free to check this forum post.
For Windows users. Make sure to install Rtools. Before installation close RStudio, install Rtools, and re-open it afterwards. To check if Rtools installed correctly, run the devtools::find_rtools() command (after installing the devtools package). If you have an error, check this link for help.
If you can not install dependencies for immunarch, please try manual installation of all dependencies by executing the following command in the R console:
install.packages(c("rematch", "prettyunits", "forcats", "cellranger", "progress", "zip", "backports", "ellipsis", "zeallot", "SparseM", "MatrixModels", "sp", "haven", "curl", "readxl", "openxlsx", "minqa", "nloptr", "RcppEigen", "utf8", "vctrs", "carData", "pbkrtest", "quantreg", "maptools", "rio", "lme4", "labeling", "munsell", "cli", "fansi", "pillar", "viridis", "car", "ellipse", "flashClust", "leaps", "scatterplot3d", "modeltools", "DEoptimR", "digest", "gtable", "lazyeval", "rlang", "scales", "tibble", "viridisLite", "withr", "assertthat", "glue", "magrittr", "pkgconfig", "R6", "tidyselect", "BH", "plogr", "purrr", "ggsci", "cowplot", "ggsignif", "polynom", "fastcluster", "plyr", "abind", "dendextend", "FactoMineR", "mclust", "flexmix", "prabclus", "diptest", "robustbase", "kernlab", "GlobalOptions", "shape", "colorspace", "stringi", "hms", "clipr", "crayon", "httpuv", "mime", "jsonlite", "xtable", "htmltools", "sourcetools", "later", "promises", "gridBase", "RColorBrewer", "yaml", "ggplot2", "dplyr", "dtplyr", "dbplyr", "data.table", "gridExtra", "ggpubr", "pheatmap", "ggrepel", "reshape2", "DBI", "factoextra", "fpc", "circlize", "tidyr", "Rtsne", "readr", "readxl", "shiny", "shinythemes", "treemap", "igraph", "airr", "ggseqlogo", "UpSetR", "stringr", "ggalluvial", "Rcpp"))
If you encounter the following error while running the devtools::install_local function:
In normalizePath(path.expand(path), winslash, mustWork) : path[1]="path/to/your/folder/with/immunarch.tar.gz": In file.copy(x$path, bundle, recursive = TRUE) : problem copying No such file or directory
Check your path to the downloaded package archive file. It should not be “path/to/your/folder/with/immunarch.tar.gz”, but a path on your PC to the downloaded file, e.g., “C:/Users/UserName/Downloads/immunarch.tar.gz” or “/Users/UserName/Downloads/immunarch.tar.gz”.
A library for rendering project templates. Works with local paths and git URLs. Your project can include any file, and Copier can dynamically replace values in any kind of text file. It generates beautiful output and takes care not to overwrite existing files unless instructed to do so.
Installation
Install Python 3.6.1 or newer (3.8 or newer if you're on Windows).
Install Git 2.24 or newer.
To use as a CLI app: pipx install copier
To use as a library: pip install copier
Quick usage
Use it in your Python code:

from copier import copy

# Create a project from a local path
copy("path/to/project/template", "path/to/destination")

# Or from a git URL.
copy("https://github.com/copier-org/copier.git", "path/to/destination")

# You can also use "gh:" as a shortcut of "https://github.com/"
copy("gh:copier-org/copier.git", "path/to/destination")

# Or "gl:" as a shortcut of "https://gitlab.com/"
copy("gl:copier-org/copier.git", "path/to/destination")

Or as a command-line tool:
copier path/to/project/template path/to/destination
Browse or tag public templates
You can browse public copier templates on GitHub using the copier-template topic. Use them as inspiration! If you want your template to appear in that list, just add the topic to it!
Credits
Special thanks go to jpscaletti for originally creating Copier. This project would not be a thing without him. Many thanks to pykong, who took over maintainership of the project, promoted it, and laid out the bases of what the project is today. Big thanks also go to Yajo for his relentless zest for improving Copier even further. Thanks a lot, pawamoy, for polishing very important rough edges and improving the documentation and UX a lot.
Loops are a sequence of instructions executed until a condition is satisfied. Let's look at how while loops work in Python.
What are loops?
If you are learning to code, loops are one of the main concepts you should understand. Loops help you execute a sequence of instructions until a condition is satisfied. There are two major types of loops in Python:
For loops
While loops
Both these types of loops can be used for similar actions. But as you learn to write efficient programs, you will know when to use which. In this article, we will look at while loops in Python. To learn more about for loops, check out this article recently published on freeCodeCamp.
While Loops
The concept behind a while loop is simple: While a condition is true -> Run my commands. The while loop will check the condition every time, and if it returns "true" it will execute the instructions within the loop. Before we start writing code, let's look at the flowchart to see how it works. Now let's write some code. Here's how you write a simple while loop to print numbers from 1 to 10.

#!/usr/bin/python
x = 1
while(x <= 10):
    print(x)
    x = x+1

If you look at the above code, the loop will only run if x is less than or equal to 10. If you initialise x as 20, the loop will never execute. Here is the output of that while loop:

> python script.py
1
2
3
4
5
6
7
8
9
10

Do-While Loop
There are two variations of the while loop – while and do-while. The difference between the two is that do-while runs at least once. A while loop might not even execute once if the condition is not met. However, do-while will run once, then check the condition for subsequent loops. In spite of being present in most of the popular programming languages, Python does not have a native do-while statement. But you can easily emulate a do-while loop using other approaches, such as functions. Let's try the do-while approach by wrapping up the commands in a function.

#!/usr/bin/python
x = 20

def run_commands():
    global x  # without this, x would be a new local variable inside the function
    x = x+1
    print(x)

run_commands()

while(x <= 10):
    run_commands()

The above code runs the "run_commands()" function once before invoking the while loop. Once the while loop starts, the "run_commands" function will never be executed again, since x is already greater than 10.
While - Else
You can add an "else" statement to run if the loop condition fails. Let's add an else condition to our code to print "Done" once we have printed the numbers from 1 to 10.

#!/usr/bin/python
x = 1
while(x <= 10):
    print(x)
    x = x+1
else:
    print("Done")

The above code will first print the numbers from 1 to 10. When x is 11, the while condition will fail, triggering the else condition.
Single Line While Statement
If you only have a small amount of code within your while loop, you can put it on the same line as the condition:

#!/usr/bin/python
x = 1
while x <= 10: print(x); x = x + 1

Infinite Loops
If you are not careful while writing loops, you will create infinite loops. Infinite loops are the ones where the condition is always true.

#!/usr/bin/python
x = 1
while (x >= 1):
    print(x)

The above code is an example of an infinite loop. There is no command to alter the value of x, so the condition "x is greater than or equal to 1" is always true. This will make the loop run forever. Always be careful while writing loops. A small mistake can lead to an infinite loop and crash your application.
Loop Control
Finally, let's look at how to control the flow of a loop while it is running. When you are writing real world applications, you will often encounter scenarios where you need to add additional conditions to skip an iteration or to break out of a loop.
Break
Let's look at how to break out of the loop while the condition is true.

#!/usr/bin/python
x = 1
while (x <= 10):
    if(x == 5):
        break
    print(x)
    x += 1

In the above code, the loop will stop execution when x is 5, even though the condition x <= 10 is still true.
Continue
Here's another scenario: say you want to skip an iteration if a certain condition is met, but continue subsequent executions until the main while condition turns false. You can use the "continue" keyword for that, like this:

#!/usr/bin/python
x = 1
while (x <= 10):
    if(x == 5):
        x += 1
        continue
    print(x)
    x += 1

In the above example, the loop will print from 1 to 10, except 5. When x is 5, the rest of the commands are skipped, x is incremented, and the control flow returns to the start of the while loop.
Summary
Loops are one of the most useful components in programming that you will use on a daily basis. For and while are the two main loops in Python. The while loop has two well-known variants, while and do-while, but Python natively supports only the former. You can control the program flow using the 'break' and 'continue' commands. Always be careful not to create infinite loops accidentally.
I regularly write on topics including Artificial Intelligence and Cybersecurity. If you liked this article, you can read my blog here.
Development using the following framework has settled down, so I've started investigating the next project: customizing intra-mart, a server-side JavaScript package. The client side also involves heavy JavaScript coding, so I figured I might as well write the development-support scripts in JavaScript too, and set up Rhino.
Setup
Download the Rhino package from the Rhino download page. Place js.jar from the extracted folder somewhere suitable; imitating Groovy, I put it in ~/.rhino/lib. In addition, place a Bash script like the following somewhere on your PATH.
rhino

$ cat <<'EOF' > rhino
#!/usr/bin/env bash
libs=""
for f in $(\ls ~/.rhino/lib/*.jar); do
  libs=$libs:$f
done
java -cp $libs org.mozilla.javascript.tools.shell.Main "${@}"
EOF
$ chmod 755 rhino

(The original post used echo <<'EOF', which writes nothing from the heredoc; cat is what actually copies it into the file.)
Running it
Let's try running it right away.
test1.js

#!/usr/bin/env rhino
print('test')

# Pass a JavaScript file as an argument to the `rhino` script created earlier to run it
$ rhino test1.js
test
# If you grant execute permission, the file can be run directly
$ chmod 700 ./test1.js
$ ./test1.js
test

Whoa! I was surprised how fast it runs, faster than I expected.
Calling commons-lang
Let's try calling commons-lang as well.
First, download the jar.
Place the commons-lang-2.4.jar you got into the ~/.rhino/lib directory.
Create a script like the following and run it.
test2.js

#!/usr/bin/env rhino
importPackage(Packages.org.apache.commons.lang)
var list = ['1', '2', '3']
var text = StringUtils.join(list, ':')
print(text)

$ rhino test2.js
1:2:3
$ chmod 700 ./test2.js
$ ./test2.js
1:2:3

Hm, this is actually pretty good. I'm starting to like it.
test3.js

#!/usr/bin/env rhino
importPackage(Packages.org.apache.commons.lang)
var list = [1, 2, 3]
list.each = function (f) {
  for (var i = 0; i < list.length; i++) f(list[i])
}
list.each(function (el) {
  print("[" + StringUtils.center(el * 2, 10) + "]")
})

# Each value is displayed centered within a field of 10 spaces
$ ./test3.js
[    2     ]
[    4     ]
[    6     ]

Postscript
When there are no jar files in ~/.rhino/lib, it starts quickly. The more jar files placed there, the more sluggish it gets. Could Groovy's heaviness have a similar cause?
Description
Given a binary tree, check whether it is a mirror of itself (i.e., symmetric around its center).
For example, this binary tree [1,2,2,3,4,4,3] is symmetric:

    1
   / \
  2   2
 / \ / \
3  4 4  3

But the following [1,2,2,null,3,null,3] is not:

    1
   / \
  2   2
   \   \
   3    3

Follow up: Solve it both recursively and iteratively.
Explanation
The key point is to implement the isMirror(TreeNode node1, TreeNode node2) function. There are two scenarios for isMirror(TreeNode node1, TreeNode node2) to return true:
when node1 and node2 are both null
when node1 and node2 aren't null, node1 and node2 have the same value, node1's left subtree is the mirror of node2's right subtree, and node1's right subtree is the mirror of node2's left subtree.
Java Solution

/**
 * Definition for a binary tree node.
 * public class TreeNode {
 *     int val;
 *     TreeNode left;
 *     TreeNode right;
 *     TreeNode(int x) { val = x; }
 * }
 */
class Solution {
    public boolean isSymmetric(TreeNode root) {
        if (root == null) {
            return true;
        }
        return isMirror(root.left, root.right);
    }

    private boolean isMirror(TreeNode node1, TreeNode node2) {
        if (node1 == null && node2 == null) {
            return true;
        }
        if (node1 == null || node2 == null) {
            return false;
        }
        return node1.val == node2.val
            && isMirror(node1.left, node2.right)
            && isMirror(node1.right, node2.left);
    }
}

Python Solution

# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, val=0, left=None, right=None):
#         self.val = val
#         self.left = left
#         self.right = right
class Solution:
    def isSymmetric(self, root: TreeNode) -> bool:
        if not root:
            return True
        return self.is_mirror(root.left, root.right)

    def is_mirror(self, root1, root2):
        if not root1 and not root2:
            return True
        if not root1 or not root2 or root1.val != root2.val:
            return False
        return self.is_mirror(root1.left, root2.right) and self.is_mirror(root1.right, root2.left)

Time complexity: O(N). Because we traverse the entire input tree once, the total run time is O(N), where N is the total number of nodes in the tree.
Space complexity: The number of recursive calls is bounded by the height of the tree. In the worst case, the tree is linear and the height is in O(N). Therefore, space complexity due to recursive calls on the stack is O(N) in the worst case.
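The follow-up also asks for an iterative version. Here is a minimal Python sketch of one common approach, pairing up mirrored nodes on a queue; the function name is ours, and TreeNode is the same class as in the Python solution above.

from collections import deque

def is_symmetric_iterative(root) -> bool:
    if root is None:
        return True
    # Each queue entry holds two nodes that must mirror each other.
    queue = deque([(root.left, root.right)])
    while queue:
        left, right = queue.popleft()
        if left is None and right is None:
            continue
        if left is None or right is None or left.val != right.val:
            return False
        # Outer children must mirror each other, and so must inner children.
        queue.append((left.left, right.right))
        queue.append((left.right, right.left))
    return True

This still visits every node once, so the time complexity stays O(N), with the explicit queue taking the place of the recursion stack.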
Face API - v1.0
This API is currently available in:
Australia East - australiaeast.api.cognitive.microsoft.com
Brazil South - brazilsouth.api.cognitive.microsoft.com
Canada Central - canadacentral.api.cognitive.microsoft.com
Central India - centralindia.api.cognitive.microsoft.com
Central US - centralus.api.cognitive.microsoft.com
East Asia - eastasia.api.cognitive.microsoft.com
East US - eastus.api.cognitive.microsoft.com
East US 2 - eastus2.api.cognitive.microsoft.com
France Central - francecentral.api.cognitive.microsoft.com
Japan East - japaneast.api.cognitive.microsoft.com
Japan West - japanwest.api.cognitive.microsoft.com
Korea Central - koreacentral.api.cognitive.microsoft.com
North Central US - northcentralus.api.cognitive.microsoft.com
North Europe - northeurope.api.cognitive.microsoft.com
South Africa North - southafricanorth.api.cognitive.microsoft.com
South Central US - southcentralus.api.cognitive.microsoft.com
Southeast Asia - southeastasia.api.cognitive.microsoft.com
UK South - uksouth.api.cognitive.microsoft.com
West Central US - westcentralus.api.cognitive.microsoft.com
West Europe - westeurope.api.cognitive.microsoft.com
West US - westus.api.cognitive.microsoft.com
West US 2 - westus2.api.cognitive.microsoft.com
UAE North - uaenorth.api.cognitive.microsoft.com
PersonGroup Person - List
List all persons' information in the specified person group, including personId, name, userData and persistedFaceIds of registered person faces. Persons are stored in alphabetical order of the personId created in PersonGroup Person - Create.
The "start" parameter (string, optional) is a personId value; only entries whose personIds are larger than it by string comparison are returned. Set "start" to empty to return from the first item.
The "top" parameter (int, optional) specifies the number of entries to return. A maximum of 1000 entries can be returned in one call. To fetch more, specify "start" with the last returned entry's personId of the current call.
For example, for a group of 5 persons with personId1 through personId5:
"start=&top=" will return all 5 persons.
"start=&top=2" will return "personId1", "personId2".
"start=personId2&top=3" will return "personId3", "personId4", "personId5".
Http Method
GET
Select the testing console in the region where you created your resource.
Request parameters
personGroupId: personGroupId of the target person group.
start: List persons from the least personId greater than "start". It contains no more than 64 characters. Default is empty.
top: The number of persons to list, ranging in [1, 1000]. Default is 1000.
Request headers
Ocp-Apim-Subscription-Key: Subscription key which provides access to this API.
Request body
None.
Response 200
A successful call returns an array of person information that belongs to the person group. JSON fields in response body:

Fields           | Type   | Description
personId         | String | personId of the person in the person group.
name             | String | Person's display name.
userData         | String | User-provided data attached to the person.
persistedFaceIds | Array  | persistedFaceId array of registered faces of the person.
[ { "personId": "25985303-c537-4467-b41d-bdb45cd95ca1", "name": "Ryan", "userData": "User-provided data attached to the person.", "persistedFaceIds": [ "015839fb-fbd9-4f79-ace9-7675fc2f1dd9", "fce92aed-d578-4d2e-8114-068f8af4492e", "b64d5e15-8257-4af2-b20a-5a750f8940e7" ] }, { "personId": "2ae4935b-9659-44c3-977f-61fac20d0538", "name": "David", "userData": "User-provided data attached to the person.", "persistedFaceIds": [ "30ea1073-cc9e-4652-b1e3-d08fb7b95315", "fbd2a038-dbff-452c-8e79-2ee81b1aa84e" ] } ] Response 401 Error code and message returned in JSON: Error Code Error Message Description Unspecified Invalid subscription Key or user/plan is blocked. { "error": { "code": "Unspecified", "message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key." } } Response 403 { "error": { "statusCode": 403, "message": "Out of call volume quota. Quota will be replenished in 2 days." } } Response 404 Error code and message returned in JSON: Error Code Error Message Description PersonGroupNotFound Person group ID is invalid. Valid format should be a string composed by numbers, English letters in lower case, '-', '_', and no longer than 64 characters. PersonGroupNotFound Person group is not found. { "error": { "code": "PersonGroupNotFound", "message": "Person group is not found." } } Response 409 { "error": { "code": ConcurrentOperationConflict, "message": "There is a conflict operation on requested resource, please try later." } } Response 429 { "error": { "statusCode": 429, "message": "Rate limit is exceeded. Try again in 26 seconds." } } Code samples @ECHO OFF curl -v -X GET "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons?start={string}&top=1000" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}" using System; using System.Net.Http.Headers; using System.Text; using System.Net.Http; using System.Web; namespace CSHttpClientSample { static class Program { static void Main() { MakeRequest(); Console.WriteLine("Hit ENTER to exit..."); Console.ReadLine(); } static async void MakeRequest() { var client = new HttpClient(); var queryString = HttpUtility.ParseQueryString(string.Empty); // Request headers client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}"); // Request parameters queryString["start"] = "{string}"; queryString["top"] = "1000"; var uri = "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons?" 
                + queryString;

            var response = await client.GetAsync(uri);
        }
    }
}

// This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
import java.net.URI;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class JavaSample
{
    public static void main(String[] args)
    {
        HttpClient httpclient = HttpClients.createDefault();
        try
        {
            URIBuilder builder = new URIBuilder("https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons");
            builder.setParameter("start", "{string}");
            builder.setParameter("top", "1000");
            URI uri = builder.build();

            HttpGet request = new HttpGet(uri);
            request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}");
            // (A GET request carries no body; the "{body}" placeholder from the console sample is omitted here.)

            HttpResponse response = httpclient.execute(request);
            HttpEntity entity = response.getEntity();
            if (entity != null)
            {
                System.out.println(EntityUtils.toString(entity));
            }
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }
    }
}

<!DOCTYPE html>
<html>
<head>
    <title>JSSample</title>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
</head>
<body>

<script type="text/javascript">
    $(function() {
        var params = {
            // Request parameters
            "start": "{string}",
            "top": "1000",
        };

        $.ajax({
            url: "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons?" + $.param(params),
            beforeSend: function(xhrObj){
                // Request headers
                xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}");
            },
            type: "GET",
            // Request body
            data: "{body}",
        })
        .done(function(data) {
            alert("success");
        })
        .fail(function() {
            alert("error");
        });
    });
</script>
</body>
</html>

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

    NSString* path = @"https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons";
    NSArray* array = @[
        // Request parameters
        @"entities=true",
        @"start={string}",
        @"top=1000",
    ];

    NSString* string = [array componentsJoinedByString:@"&"];
    path = [path stringByAppendingFormat:@"?%@", string];
    NSLog(@"%@", path);

    NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]];
    [_request setHTTPMethod:@"GET"];
    // Request headers
    [_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"];
    // Request body
    [_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]];

    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error];

    if (nil != error)
    {
        NSLog(@"Error: %@", error);
    }
    else
    {
        NSError* error = nil;
        NSMutableDictionary* json = nil;
        NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding];
        NSLog(@"%@", dataString);

        if (nil != _connectionData)
        {
            json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error];
        }

        if (error || !json)
        {
            NSLog(@"Could not parse loaded json with error:%@", error);
        }

        NSLog(@"%@", json);
        _connectionData = nil;
    }

    [pool drain];
    return 0;
}

<?php
// This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
require_once 'HTTP/Request2.php';

$request = new Http_Request2('https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons');
$url = $request->getUrl();

$headers = array(
    // Request headers
    'Ocp-Apim-Subscription-Key' => '{subscription key}',
);
$request->setHeader($headers);

$parameters = array(
    // Request parameters
    'start' => '{string}',
    'top' => '1000',
);
$url->setQueryVariables($parameters);

$request->setMethod(HTTP_Request2::METHOD_GET);

// Request body
$request->setBody("{body}");

try
{
    $response = $request->send();
    echo $response->getBody();
}
catch (HttpException $ex)
{
    echo $ex;
}
?>

########### Python 2.7 #############
import httplib, urllib, base64

headers = {
    # Request headers
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.urlencode({
    # Request parameters
    'start': '{string}',
    'top': '1000',
})

try:
    conn = httplib.HTTPSConnection('northeurope.api.cognitive.microsoft.com')
    conn.request("GET", "/face/v1.0/persongroups/{personGroupId}/persons?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################

########### Python 3.2 #############
import http.client, urllib.request, urllib.parse, urllib.error, base64

headers = {
    # Request headers
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.parse.urlencode({
    # Request parameters
    'start': '{string}',
    'top': '1000',
})

try:
    conn = http.client.HTTPSConnection('northeurope.api.cognitive.microsoft.com')
    conn.request("GET", "/face/v1.0/persongroups/{personGroupId}/persons?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################

require 'net/http'

uri = URI('https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons')
uri.query = URI.encode_www_form({
    # Request parameters
    'start' => '{string}',
    'top' => '1000'
})

request = Net::HTTP::Get.new(uri.request_uri)
# Request headers
request['Ocp-Apim-Subscription-Key'] = '{subscription key}'
# Request body
request.body = "{body}"

response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
    http.request(request)
end

puts response.body
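Tying the "start"/"top" parameters together, the paging scheme described above can be driven in a loop: request a page, then pass the last returned personId as the next "start". The sketch below is an illustrative assumption, not an official sample; it uses the third-party requests package, and the endpoint and key values are placeholders.

import requests

ENDPOINT = "https://northeurope.api.cognitive.microsoft.com"  # placeholder region
KEY = "{subscription key}"  # placeholder

def list_all_persons(person_group_id, page_size=1000):
    """Fetch every person in a person group, one page at a time."""
    persons, start = [], ""
    while True:
        resp = requests.get(
            f"{ENDPOINT}/face/v1.0/persongroups/{person_group_id}/persons",
            params={"start": start, "top": page_size},
            headers={"Ocp-Apim-Subscription-Key": KEY},
        )
        resp.raise_for_status()
        page = resp.json()
        persons.extend(page)
        if len(page) < page_size:
            return persons  # short page means we reached the end
        start = page[-1]["personId"]  # continue after the last returned entry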
This documentation is not for the latest stable Salvus version.
We consider a spatial domain $\Omega \subset \mathbb{R}^d$ ($d = 2$ or $3$), a time interval $I = (0, T)$, and a diffusion equation of the following form:
$$m_0(\mathbf{x})\,\partial_t u(\mathbf{x}, t) - \nabla \cdot \big(\mathbf{D}(\mathbf{x})\,\nabla u(\mathbf{x}, t)\big) = f(\mathbf{x}, t),$$
with initial conditions $u(\mathbf{x}, 0) = u_0(\mathbf{x})$. Here, $u$ denotes the space- and time-dependent diffusive field and $f$ describes external forces. $\partial_t$ denotes the first time derivative and $\nabla$ the spatial gradient operator. Furthermore, the scalar parameter $m_0$ and the symmetric second-order diffusion tensor $\mathbf{D}$ are space-dependent coefficients. $\mathbf{D}$ can be related to a Wiener process using the relation
$$\mathbf{D} = \frac{1}{2T}\,\mathrm{diag}\big(\sigma_1^2, \ldots, \sigma_d^2\big),$$
with direction-dependent smoothing lengths $\sigma_i$. For the special case of $T = 1$ and $m_0 = 1$, $\sigma_i$ corresponds to the standard deviation of the Gaussian smoothing in meters. In the isotropic case, $\mathbf{D}$ simplifies to a scalar value, in which case we may re-write the diffusion equation as
$$m_0(\mathbf{x})\,\partial_t u(\mathbf{x}, t) - m_1(\mathbf{x})\,\Delta u(\mathbf{x}, t) = f(\mathbf{x}, t),$$
with $m_1 = \sigma^2 / (2T)$ and the isotropic smoothing length $\sigma$.

%config Completer.use_jedi = False

# Standard Python packages
import matplotlib.pyplot as plt
import numpy as np
import os
import toml

# Salvus imports
from salvus.mesh.structured_grid_2D import StructuredGrid2D
from salvus.mesh.unstructured_mesh import UnstructuredMesh
import salvus.flow.api
import salvus.flow.simple_config as sc

SALVUS_FLOW_SITE_NAME = os.environ.get("SITE_NAME", "token")

sg = StructuredGrid2D.rectangle(nelem_x=40, nelem_y=60, max_x=4.0, max_y=6.0)
mesh = sg.get_unstructured_mesh()
mesh.find_side_sets("cartesian")

input_mesh = mesh.copy()
input_mesh.attach_field("some_field", np.random.randn(mesh.npoint))
input_mesh.map_nodal_fields_to_element_nodal()
input_mesh.write_h5("initial_values.h5")

input_mesh
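As a quick, Salvus-independent sanity check of the smoothing-length relation above, one can diffuse a delta spike with a simple 1-D explicit finite-difference scheme and measure the standard deviation of the result; the grid sizes and the target sigma below are arbitrary choices for illustration.

import numpy as np

sigma = 0.05                 # target smoothing length [m]
T, D = 1.0, sigma**2 / 2.0   # with m0 = 1, D = sigma^2 / (2 T)

n, L = 2001, 2.0
dx = L / (n - 1)
dt = 0.4 * dx**2 / D         # stable explicit time step (CFL-like bound)
x = np.linspace(-L / 2, L / 2, n)

u = np.zeros(n)
u[n // 2] = 1.0 / dx         # discrete delta with unit integral
t = 0.0
while t < T:
    # Explicit Euler step of du/dt = D * d2u/dx2
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt

# Standard deviation of the resulting bell curve should be ~sigma,
# since diffusing for time T yields a Gaussian with variance 2 D T.
std = np.sqrt(np.sum(x**2 * u) / np.sum(u))
print(std)  # ~0.05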
TensorFlow 1 version | View source on GitHub
Computes the crossentropy loss between the labels and predictions.
Inherits From: Loss

tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False, reduction=losses_utils.ReductionV2.AUTO,
    name='sparse_categorical_crossentropy'
)

Used in the notebooks: used in the guide, used in the tutorials.
Use this crossentropy loss function when there are two or more label classes. We expect labels to be provided as integers. If you want to provide labels using one-hot representation, please use CategoricalCrossentropy loss. There should be # classes floating point values per feature for y_pred and a single floating point value per feature for y_true.
In the snippet below, there is a single floating point value per example for y_true and # classes floating point values per example for y_pred. The shape of y_true is [batch_size] and the shape of y_pred is [batch_size, num_classes].
Standalone usage:

y_true = [1, 2]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
# Using 'auto'/'sum_over_batch_size' reduction type.
scce = tf.keras.losses.SparseCategoricalCrossentropy()
scce(y_true, y_pred).numpy()
1.177

# Calling with 'sample_weight'.
scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()
0.814

# Using 'sum' reduction type.
scce = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM)
scce(y_true, y_pred).numpy()
2.354

# Using 'none' reduction type.
scce = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)
scce(y_true, y_pred).numpy()
array([0.0513, 2.303], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd',
              loss=tf.keras.losses.SparseCategoricalCrossentropy())

Args
from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. Note: using from_logits=True may be more numerically stable.
reduction: (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details.
name: Optional name for the op. Defaults to 'sparse_categorical_crossentropy'.
Methods
from_config

@classmethod
from_config(
    config
)

Instantiates a Loss from its config (output of get_config()).
Args
config: Output of get_config().
Returns
A Loss instance.
get_config

get_config()

Returns the config dictionary for a Loss instance.
__call__

__call__(
    y_true, y_pred, sample_weight=None
)

Invokes the Loss instance.
Args
y_true: Ground truth values, shape = [batch_size, d0, .. dN], except for sparse loss functions such as sparse categorical crossentropy, where shape = [batch_size, d0, .. dN-1].
y_pred: The predicted values, shape = [batch_size, d0, .. dN].
sample_weight: Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight.
(Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.)
Returns
Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.)
Raises
ValueError: If the shape of sample_weight is invalid.
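For reference, the standalone-usage numbers above can be reproduced with plain NumPy, since sparse categorical crossentropy on probabilities is simply -log(p[true class]) per example. This is a minimal sketch for verification, not part of the TensorFlow docs.

import numpy as np

y_true = [1, 2]
y_pred = np.array([[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]])

# -log of the probability assigned to the true class of each example
per_sample = -np.log(y_pred[np.arange(len(y_true)), y_true])
print(per_sample)         # ~[0.0513, 2.3026], matching the 'none' reduction
print(per_sample.mean())  # ~1.177, matching the default 'sum_over_batch_size'
print(per_sample.sum())   # ~2.354, matching the 'sum' reduction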
h-entry? Like h-card, h-entry provides an attribute vocabulary. While h-card focuses on people and organizations, h-entry describes shared content — blog posts and comments in particular, but you could expand it as far as you like. Want to generate a feed of git commits? You could use h-entry to describe a commit! You totally can! I plan to examine Webmention — the mechanism behind replies, likes, reposts, etc. They’re the fun conversation part of IndieWeb after all. But I need to make sure that when I get to the conversation I have a clear understanding of who is taking part — the h-cards — and where the discussions take place — the h-entries. But you don’t need to wait for me. There are fine tutorials out there to walk you through the process. https://IndieWebify.me in particular tells you everything you need to know. Fine. Let’s get on with it IndieWeb entries identify themselves with the h-entry class.e-content marks the content of the entry. You could always mark thesame element as both. In fact that’s basically what I’ve been doing fora while. I’m trying to move away from that though. Let’s give it a little structure. <article class="h-entry"> <header> ... metadata like title and tags ... </header> <section class="e-content"> ... my insightful post ... </section> <footer> ... supplemental content like social links ... </footer> </article> Time to focus on putting useful metadata in the article header. Might as well expose some of the Hugo templating as well. layouts/_default/single.html {{ define "main" }} <article class="h-entry"> <header> {{ .Render "article-header" }} </header> <section class="e-content"> {{ .Content }} </section> <footer> {{ .Render "social" }} </article> {{ end }} The bare minimum For IndieWeb purposes, we need to know at least two things about every entry: u-url where it was published dt-published when it was published I’ll put both in atimeelement. layouts/post/article-header.html <time class="dt-published" datetime="{{ .Format $.Site.Params.TimestampForm }}"> <a class="u-url" href="{{ .Permalink }}"> {{ .Format $.Site.Params.DateForm }} </a> </time> $TimestampFormis set inconfig.tomlas"2006-01-02T15:04:00-07:00" $DateFormis set to"Monday, January 2, 2006" time lets me include a machine-readable timestamp and a human-readabledate string. I play a lot with what I consider “human-readable,” so aconsistent format for machines is good. My blog follows mundane convention, assigning a title to every post. Ialso like to add a description to clarify the topic. These are goodcandidates for p-name and p-summary. <h1 class="p-name">{{ .Title }}</h1> {{- with .Params.Description -}} <p class="p-summary">{{ . | markdownify }}</p> {{- end -}} ⋮ Let’s see that in action with my post on weighing files in Python. Who wrote this, anyways? Seems a bit silly on my single-author site, but explicit authorshipdoes make things clearer to casual visitors. Fortunately I have a canonical h-card that I can link to. ⋮ — by <a class="p-author h-card" rel="author" href="{{ .Site.BaseURL }}">{{ .Site.Author.name }}</a> How do I classify my entry? Now to sprinkle some p-category items in to help folks understandwhere the post fits with the rest of my site. I organize my Hugo content bytype — currentlyNote or Post — and then add optional details withcategories and tags. The post should probablyshow each of those as a p-category. {{- with .Type -}} <br> <a class="p-category" href="/{{ . | urlize }}">{{ . 
| title }}</a>
{{- end -}}
{{ with .Params.category }}
  — <a class="p-category" href="/categories/{{ . | urlize }}">{{ . | title }}</a>
{{ end }}
{{ with .Params.tags }}
  —
  {{ range . }}
    <a class="p-category tag" href="/tags/{{ . | urlize }}">{{ . }}</a>
  {{ end }}
{{ end }}

What about cover images? Many — but not all — of my posts include a cover image. Cover images should almost definitely be u-photo. There’s a lot of image processing with it though. To make a long story short — too late! — I’ll just show the microformat-specific addition.

⋮
<img {{ if $isCover }}class="u-photo"{{ end }}
⋮

Yep, that’s a post header all right. What about validation? Did I get the microformats right? Examining my microformats locally I know I can validate my h-entry at IndieWebify or copy and paste to https://microformats.io, but I want to look at this stuff from the shell. Preferably with a single command. Ideally with something I can stash in my tasks.py file. mf2py and mf2util provide microformats2 handling for Python code. I mainly want a dump of microformats found in a given URL, in a format easier for me to read than JSON. Here’s what I came up with. I got carried away. This could have been its own post. Oh well. It’s like a two-for-one deal!

import json
import sys
import textwrap

from invoke import task
import mf2py
import mf2util
from ruamel.yaml import YAML
import toml

I need different formats for different purposes, so I import Python libraries for YAML and TOML along with the standard library JSON support (plus sys, since the YAML dump writes to sys.stdout).

def shorten_properties(d, width=80):
    """Find text in `d`, shortening it to fit in `width` columns"""
    if d is None:
        return
    if isinstance(d, dict):
        for key, value in d.items():
            d[key] = shorten_properties(value)
    elif isinstance(d, list):
        d = [shorten_properties(i) for i in d]
    elif isinstance(d, str):
        d = textwrap.shorten(d, width=width)
    return d

Sometimes microformat info is a wall of text. Quite often, in fact, since e-content includes the full content of any post. shorten_properties uses textwrap to keep large text properties from overwhelming me. Now that I have the support code I need, it’s time for the Pyinvoke task.

@task(
    help={
        "url": "Web address to examine",
        "format": "preferred output format",
        "interpret": "whether to interpret the parsed entries",
        "everything": "whether to display items only or everything parsed",
        "shorten": "whether to shorten text found to 80 characters",
    }
)
def mf2(c, url, format="json", interpret=False, everything=False, shorten=True):
    """Display any microformats2 data from `url`"""
    entry = mf2py.parse(url=url)
    wants_json = format == "json"
    # Usually I just care about the h-* items
    if not everything:
        entry = {"items": entry["items"]}
    # Sometimes I want mf2util's summarized version
    if interpret:
        entry = mf2util.interpret(entry, url, want_json=wants_json)
    # I usually don't want a wall of text
    if shorten:
        entry = shorten_properties(entry)
    if format == "yml":
        YAML().dump(entry, sys.stdout)
    elif format == "toml":
        print(toml.dumps(entry))
    elif format == "json":
        print(json.dumps(entry, sort_keys=True, indent=2))
    else:
        raise KeyError(f"Unknown format '{format}' requested")

I could have made this a small script, but I’m pretty sure I’ll check microformats routinely while working on the site. Makes sense to have it readily available. Let’s try out my new mf2 task.
$ invoke mf2 http://localhost:1313/2019/06/01/weighing-files-with-python/ -f yml
items:
- type:
  - h-entry
  properties:
    name:
    - Weighing Files With Python
    summary:
    - I want to optimize this site’s file sizes, but first I should see if I need to.
    published:
    - '2019-06-01T00:00:00+00:00'
    url:
    - http://localhost:1313/2019/06/01/weighing-files-with-python/
    author:
    - type:
      - h-card
      properties:
        name:
        - Brian Wisti
        url:
        - http://localhost:1313/
      value: Brian Wisti
    category:
    - Post
    - Programming
    - python
    - site
    - files
    photo:
    - http://localhost:1313/2019/06/01/weighing-files-with-python/cover.png
    content:
    - html: <div class="sidebarblock"> <div class="content"> <div [...]
      value: Updates 2019-06-02 adjusted a couple clumsy property methods with [...]
    syndication:
    - https://hackers.town/@randomgeek/102199106551447993
    - https://twitter.com/brianwisti/status/1134977256684761089

What about default JSON output and letting mf2util interpret the results?

$ inv mf2 http://localhost:1313/2019/06/01/weighing-files-with-python -i
{
  "author": {
    "name": "Brian Wisti",
    "url": "http://localhost:1313/"
  },
  "content": "<div class=\"sidebarblock\"> <div class=\"content\"> <div [...]",
  "content-plain": "Updates 2019-06-02 adjusted a couple clumsy property methods with [...]",
  "name": "Weighing Files With Python",
  "photo": "http://localhost:1313/2019/06/01/weighing-files-with-python/cover.png",
  "published": "2019-06-01T00:00:00+00:00",
  "summary": "I want to optimize this site\u2019s file sizes, but first I should see if I need to.",
  "syndication": [
    "https://twitter.com/brianwisti/status/1134977256684761089",
    "https://hackers.town/@randomgeek/102199106551447993"
  ],
  "type": "entry",
  "url": "http://localhost:1313/2019/06/01/weighing-files-with-python/"
}

Nice. I can tidy it up a bit later. Probably end up using those mf2util functions. But this works great for now. And my h-entry looks good! Examine microformats on other sites Oh hey I can grab any URL. This handles another issue I had: trying to examine microformats on other sites. Let’s grab Jacky Alciné’s h-card!

$ inv mf2 https://v2.jacky.wtf -f toml
[[items]]
type = [ "h-card",]
[items.properties]
name = [ "Jacky Alciné",]
photo = [ "https://v2.jacky.wtf/media/profile-image",]
url = [ "https://v2.jacky.wtf",]

[[items]]
type = [ "h-feed",]
[[items.children]]
type = [ "h-entry",]
[items.children.properties]
author = [ "https://v2.jacky.wtf",]
url = [ "https://v2.jacky.wtf/post/a53bb7c4-2831-4666-ad85-75433ab2b1c3",]
published = [ "2020-04-26T08:57:39-07:00",]
[[items.children.properties.in-reply-to]]
type = [ "h-cite",]
value = "https://twitter.com/tiffani/status/1254438450897530882"
[items.children.properties.in-reply-to.properties]
url = [ "https://twitter.com/tiffani/status/1254438450897530882",]
[[items.children.properties.in-reply-to.properties.author]]
type = [ "h-card",]
value = "https://twitter.com/tiffani"
[items.children.properties.in-reply-to.properties.author.properties]
name = [ "Tiffani Ashley Bell",]
url = [ "https://twitter.com/tiffani",]
[[items.children.properties.in-reply-to.properties.content]]
html = "Definitely need to take a long walk today. Staying in the house all day is [...]"
value = "Definitely need to take a long walk today.
Staying in the house all day is [...]"
[[items.children.properties.content]]
html = "<p>Just came back from one and I felt so much better about this with the [...]"
value = "Just came back from one and I felt so much better about this with the way [...]"
[items.properties]
name = [ "Last Note",]
uid = [ "https://v2.jacky.wtf/stream",]
url = [ "https://v2.jacky.wtf/stream",]
author = [ "https://v2.jacky.wtf",]

Neat. Now I can collect more h-cards for a blogroll idea I had. Better post this first.
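If you just want a quick look at the raw parse without the Invoke task, mf2py alone is enough. A minimal sketch (the URL is a placeholder, not one of the sites above):

```python
import json

import mf2py

# Parse any page and dump the h-* items it exposes.
parsed = mf2py.parse(url="https://example.com")
print(json.dumps(parsed["items"], indent=2))
```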
child_process won’t execute I’m trying to execute a python script using child_process. However my function doesn’t seem to be triggering.

const spawn = require("child_process").spawn
var NodeHelper = require("node_helper")

const process = spawn("python3", ["return_something.py"])

module.exports = NodeHelper.create({
  start: function() {
    this.countDown = 10000000
  },
  socketNotificationReceived: function(notification, payload) {
    switch(notification){
      case "DO_YOUR_JOB":
        console.log(payload)
        this.sendSocketNotification("I_DID", (this.countDown - payload))
        break
      case "RETRIEVE_DATA":
        console.log(payload)
        this.job()
        break
    }
  },
  job: function(){
    console.log("I'm trying to retrieve data")
    process.stdout.on('data', (data)=>{ //********everything works up till here
      console.log("inside")
      var result = String.fromCharCode.apply(null, new Uint16Array((data)))
      this.sendSocketNotification("DATA_RETRIEVED", result)
    })
  }
})

Python script.

import sys
print("Hello, I'm Amira")
sys.stdout.flush()

how about like this

const spawn = require("child_process").spawn
var NodeHelper = require("node_helper")

module.exports = NodeHelper.create({
  start: function() {
    this.countDown = 10000000
  },
  socketNotificationReceived: function(notification, payload) {
    switch(notification){
      case "DO_YOUR_JOB":
        console.log(payload)
        this.sendSocketNotification("I_DID", (this.countDown - payload))
        break
      case "RETRIEVE_DATA":
        console.log(payload)
        this.job()
        break
    }
  },
  job: function(){
    console.log("I'm trying to retrieve data")
    var process = spawn("python3", ["return_something.py"])
    process.stdout.on('data', (data)=>{ //********everything works up till here
      console.log("inside")
      var result = String.fromCharCode.apply(null, new Uint16Array((data)))
      this.sendSocketNotification("DATA_RETRIEVED", result)
    })
  }
})

@sdetweil thanks for the response. that doesn’t seem to be working either. Is there something I need to install to be able to use stdout? @Temisola1 i will look at it in the morning… you are seeing the “I’m trying to retrieve data” in the terminal window where you did npm start, right? @sdetweil That is correct. It works up till that point. @Temisola1 if you run this does it work from the terminal window?

import sys
# Takes first name and last name via command
# line arguments and then display them
print("Output from Python")
print("First name: " )
print("Last name: " )

testit.js

var spawn = require("child_process").spawn;
// Parameters passed in spawn -
// 1. type_of_script
// 2. list containing Path of the script
//    and arguments for the script
// E.g : http://localhost:3000/name?firstname=Mike&lastname=Will
// so, first name = Mike and last name = Will
var process = spawn('python3', ["./testit.py"]);
// req.query.firstname,
// req.query.lastname] );

// Takes stdout data from script which executed
// with arguments and send this data to res object
process.stdout.on('data', (data)=> {
  console.log("received " + data);
})

works for me… (note python script path is ./, make sure u have right path to py file) then do node testit.js @sdetweil So I tried putting in the full path in my node_helper as opposed to relative path and that seemed to work. is there a way I can log the current directory in nodejs.
It seems that’s the issue @sdetweil after running successfully a few times it now returns Buffer 48 45 5c… or something that looks like hexadecimal code @Temisola1 the node_helper doesn’t know where it is… but the Modulename does… this.path so, you can add that to the config info you send down in the typical sendSocketNotification("somevalue", this.config) to pass parameters to the node_helper
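On the Python side of such a helper, emitting one JSON line and flushing makes the stdout 'data' handler receive a single complete, decodable payload. A sketch of what return_something.py could look like (the field names are illustrative, not from the thread):

```python
# return_something.py (illustrative sketch):
# print one JSON line and flush so the Node helper's
# stdout handler fires with a complete payload.
import json
import sys

payload = {"greeting": "Hello, I'm Amira", "count": 42}
print(json.dumps(payload))
sys.stdout.flush()
```

The Node side can then call JSON.parse(data.toString()) instead of decoding the Buffer by hand with Uint16Array.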
We are a Swiss Army knife for your files Transloadit is a service for companies with developers. We handle their file uploads and media processing. This means that they can save on development time and the heavy machinery that is required to handle big volumes in an automated way. We pioneered this concept in 2009 and have made our customers happy ever since. We are still actively improving our service in 2021, as well as our open source projects uppy.io and tus.io, which are changing how the world does file uploading. Encode blur-out effect into video file You can encode some spectacular effects in your video files using our /video/encode Robot. In particular, using our ffmpeg parameter, you can add special effects outside the bounds of our traditional API options. To demonstrate, we'll create a blur-out fade effect. First, let's encode our video with our ipad-high preset. This will bring numerous encoding properties to our video file, such as the bitrates and codecs used. For the full list of properties brought over by our presets, please check out our Preset documentation. Because presets bring over a series of FFmpeg-related properties, adding FFmpeg's filter_complex filter to the same Step makes the Assembly fail with an error about conflicting filters. To bypass this, we simply pass our first Step to a second Step that has a preset value of "empty". Now with all that out of the way, we can look at how our desired effect is achieved. The general idea is that we use filter_complex's ability to take multiple effects and pipe them into one another. Using this premise, we use the split filter to break our input video into two layers, base and blurred. We create the blurred stream with the boxblur filter, which gives three ways to tune it: luma controls brightness, chroma controls color, and alpha controls transparency. Then we add in our fade effect, using the overlay filter to layer the two streams on top of each other as the blurred one fades in. We hope this has been a good showcase of just how powerful FFmpeg can be in upping your encoding game. Happy Transcoding! 1. Handle uploads We can handle uploads of your users directly. Learn more › 2. Transcode videos to iPad (high quality) (H.264) We offer a variety of video encoding features like optimizing for different devices, merging, injecting ads, changing audio tracks, or adding company logos. Learn more › 3. Transcode videos to Original Codec Settings 4.
Export files to Amazon S3 We export to the storage platform of your choice. Learn more › Once all files have been exported, we can ping a URL of your choice with the Assembly status JSON. Build this in your own language { ":original": { "robot": "/upload/handle" }, "encode": { "use": ":original", "robot": "/video/encode", "ffmpeg_stack": "v3.3.3", "result": true, "preset": "ipad-high" }, "blur-fade": { "use": "encode", "robot": "/video/encode", "ffmpeg_stack": "v3.3.3", "result": true, "ffmpeg": { "filter_complex": "[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]", "map": "[blurout]" }, "preset": "empty" }, "exported": { "use": [ ":original", "encode", "blur-fade" ], "robot": "/s3/store", "credentials": "YOUR_AWS_CREDENTIALS", "url_prefix": "https://demos.transloadit.com/" } } # Prerequisites: brew install curl jq || sudo apt install curl jq # To avoid tampering, use Signature Authentication echo '{ "auth": { "key": "YOUR_TRANSLOADIT_KEY" }, "steps": { ":original": { "robot": "/upload/handle" }, "encode": { "use": ":original", "robot": "/video/encode", "ffmpeg_stack": "v3.3.3", "result": true, "preset": "ipad-high" }, "blur-fade": { "use": "encode", "robot": "/video/encode", "ffmpeg_stack": "v3.3.3", "result": true, "ffmpeg": { "filter_complex": "[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]", "map": "[blurout]" }, "preset": "empty" }, "exported": { "use": [ ":original", "encode", "blur-fade" ], "robot": "/s3/store", "credentials": "YOUR_AWS_CREDENTIALS", "url_prefix": "https://demos.transloadit.com/" } } }' |curl \ --request POST \ --form 'params=<-' \ --form my_file1=@./big-buck-bunny-5s.mp4 \ https://api2.transloadit.com/assemblies \ |jq // Add 'Transloadit' to your Podfile, run 'pod install', add credentials to 'Info.plist' import Arcane import TransloaditKit // Set Encoding Instructions var AssemblySteps: Array<Step?> = [] // An array to hold the Steps var Step1 = Step (key: ":original") // Create a Step object Step1?.setValue("/upload/handle", forOption: "robot") // Add the details AssemblySteps.append(Step1) // Add the Step to the array var Step2 = Step (key: "encode") // Create a Step object Step2?.setValue(":original", forOption: "use") // Add the details Step2?.setValue("/video/encode", forOption: "robot") // Add the details Step2?.setValue(true, forOption: "result") // Add the details Step2?.setValue("v3.3.3", forOption: "ffmpeg_stack") // Add the details Step2?.setValue("ipad-high", forOption: "preset") // Add the details AssemblySteps.append(Step2) // Add the Step to the array var Step3 = Step (key: "blur-fade") // Create a Step object Step3?.setValue("encode", forOption: "use") // Add the details Step3?.setValue("/video/encode", forOption: "robot") // Add the details Step3?.setValue(true, forOption: "result") // Add the details Step3?.setValue(["filter_complex": "[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]", "map": "[blurout]"], forOption: "ffmpeg") // Add the details Step3?.setValue("v3.3.3", forOption: "ffmpeg_stack") // Add the details
Step3?.setValue("empty", forOption: "preset") // Add the details AssemblySteps.append(Step3) // Add the Step to the array var Step4 = Step (key: "exported") // Create a Step object Step4?.setValue([":original","encode","blur-fade"], forOption: "use") // Add the details Step4?.setValue("/s3/store", forOption: "robot") // Add the details Step4?.setValue("YOUR_AWS_CREDENTIALS", forOption: "credentials") // Add the details Step4?.setValue("https://demos.transloadit.com/", forOption: "url_prefix") // Add the details AssemblySteps.append(Step4) // Add the Step to the array // We then create an Assembly Object with the Steps and files var MyAssembly: Assembly = Assembly(steps: AssemblySteps, andNumberOfFiles: 1) // Add files to upload MyAssembly.addFile("./big-buck-bunny-5s.mp4") // Start the Assembly Transloadit.createAssembly(MyAssembly) // Fires after your Assembly has completed transloadit.assemblyStatusBlock = {(_ completionDictionary: [AnyHashable: Any]) -> Void in print("\(completionDictionary.description)") } <body> <form action="/uploads" enctype="multipart/form-data" method="POST"> <input type="file" name="my_file" multiple="multiple" /> </form> <script src="//ajax.googleapis.com/ajax/libs/jquery/3.2.0/jquery.min.js"></script> <script src="//assets.transloadit.com/js/jquery.transloadit2-v3-latest.js"></script> <script type="text/javascript"> $(function() { $('form').transloadit({ wait: true, triggerUploadOnFileSelection: true, params: { auth: { // To avoid tampering use signatures: // https://transloadit.com/docs/api/#authentication key: 'YOUR_TRANSLOADIT_KEY', }, // It's often better to store encoding instructions in your account // and use a `template_id` instead of adding these steps inline steps: { ':original': { robot: '/upload/handle' }, encode: { use: ':original', robot: '/video/encode', result: true, ffmpeg_stack: 'v3.3.3', preset: 'ipad-high' }, 'blur-fade': { use: 'encode', robot: '/video/encode', result: true, ffmpeg: {'filter_complex':'[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]','map':'[blurout]'}, ffmpeg_stack: 'v3.3.3', preset: 'empty' }, exported: { use: [':original','encode','blur-fade'], robot: '/s3/store', credentials: 'YOUR_AWS_CREDENTIALS', url_prefix: 'https://demos.transloadit.com/' } } } }); }); </script> </body> <!-- This pulls Uppy from our CDN.
Alternatively use `npm i @uppy/robodog --save` --> <!-- if you want smaller self-hosted bundles and/or to use modern JavaScript --> <link href="//releases.transloadit.com/uppy/robodog/v1.6.7/robodog.min.css" rel="stylesheet"> <script src="//releases.transloadit.com/uppy/robodog/v1.6.7/robodog.min.js"></script> <button id="browse">Select Files</button> <script> document.getElementById('browse').addEventListener('click', function () { var uppy = window.Robodog.pick({ providers: [ 'instagram', 'url', 'webcam', 'dropbox', 'google-drive', 'facebook', 'onedrive' ], waitForEncoding: true, params: { // To avoid tampering, use Signature Authentication auth: { key: 'YOUR_TRANSLOADIT_KEY' }, // To hide your `steps`, use a `template_id` instead steps: { ':original': { robot: '/upload/handle' }, encode: { use: ':original', robot: '/video/encode', result: true, ffmpeg_stack: 'v3.3.3', preset: 'ipad-high' }, 'blur-fade': { use: 'encode', robot: '/video/encode', result: true, ffmpeg: {'filter_complex':'[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]', 'map':'[blurout]'}, ffmpeg_stack: 'v3.3.3', preset: 'empty' }, exported: { use: [':original', 'encode', 'blur-fade'], robot: '/s3/store', credentials: 'YOUR_AWS_CREDENTIALS', url_prefix: 'https://demos.transloadit.com/' } } } }).then(function (bundle) { // Due to `waitForEncoding: true` this is fired after encoding is done. // Alternatively, set `waitForEncoding` to `false` and provide a `notify_url` // for Async Mode where your back-end receives the encoding results // so that your user can be on their way as soon as the upload completes. console.log(bundle.transloadit) // Array of Assembly Statuses console.log(bundle.results) // Array of all encoding results }).catch(console.error) }) </script> // yarn add transloadit || npm i transloadit --save-exact const Transloadit = require('transloadit') const transloadit = new Transloadit({ authKey: 'YOUR_TRANSLOADIT_KEY', authSecret: 'YOUR_TRANSLOADIT_SECRET' }) // Set Encoding Instructions const options = { params: { steps: { ':original': { robot: '/upload/handle', }, encode: { use: ':original', robot: '/video/encode', result: true, ffmpeg_stack: 'v3.3.3', preset: 'ipad-high', }, 'blur-fade': { use: 'encode', robot: '/video/encode', result: true, ffmpeg: {'filter_complex':'[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]','map':'[blurout]'}, ffmpeg_stack: 'v3.3.3', preset: 'empty', }, exported: { use: [':original','encode','blur-fade'], robot: '/s3/store', credentials: 'YOUR_AWS_CREDENTIALS', url_prefix: 'https://demos.transloadit.com/', }, } } } // Add files to upload transloadit.addFile('myfile_1', './big-buck-bunny-5s.mp4') // Start the Assembly transloadit.createAssembly(options, (err, result) => { if (err) { throw err } console.log({result}) }) [sudo] npm install transloadify -g export TRANSLOADIT_KEY="YOUR_TRANSLOADIT_KEY" export TRANSLOADIT_SECRET="YOUR_TRANSLOADIT_SECRET" # Save Encoding Instructions echo '{ ":original": { "robot": "/upload/handle" }, "encode": { "use": ":original", "robot": "/video/encode", "ffmpeg_stack": "v3.3.3", "result": true, "preset": "ipad-high" }, "blur-fade": { "use": "encode", "robot": "/video/encode", "ffmpeg_stack": "v3.3.3", "result": true,
"ffmpeg": { "filter_complex": "[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]", "map": "[blurout]" }, "preset": "empty" }, "exported": { "use": [ ":original", "encode", "blur-fade" ], "robot": "/s3/store", "credentials": "YOUR_AWS_CREDENTIALS", "url_prefix": "https://demos.transloadit.com/" } }' > ./steps.json transloadify \ --input "./big-buck-bunny-5s.mp4" \ --output "./output.example" \ --steps "./steps.json" // composer require transloadit/php-sdk use transloadit\Transloadit; $transloadit = new Transloadit([ "key" => "YOUR_TRANSLOADIT_KEY", "secret" => "YOUR_TRANSLOADIT_SECRET", ]); // Add files to upload $files = []; array_push($files, "./big-buck-bunny-5s.mp4") // Start the Assembly $response = $transloadit->createAssembly([ "files" => $files, "params" => [ "steps" => [ ":original" => [ "robot" => "/upload/handle", ], "encode" => [ "use" => ":original", "robot" => "/video/encode", "result" => true, "ffmpeg_stack" => "v3.3.3", "preset" => "ipad-high", ], "blur-fade" => [ "use" => "encode", "robot" => "/video/encode", "result" => true, "ffmpeg" => [ "filter_complex" => "[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]", "map" => "[blurout]", ], "ffmpeg_stack" => "v3.3.3", "preset" => "empty", ], "exported" => [ "use" => [":original", "encode", "blur-fade"], "robot" => "/s3/store", "credentials" => "YOUR_AWS_CREDENTIALS", "url_prefix" => "https://demos.transloadit.com/", ], ], ], ]); # gem install transloadit transloadit = Transloadit.new( :key => "YOUR_TRANSLOADIT_KEY", :secret => "YOUR_TRANSLOADIT_SECRET" ) # Set Encoding Instructions :original = transloadit.step ":original", "/upload/handle", ) encode = transloadit.step "encode", "/video/encode", :use => ":original", :result => true, :ffmpeg_stack => "v3.3.3", :preset => "ipad-high" ) blur-fade = transloadit.step "blur-fade", "/video/encode", :use => "encode", :result => true, :ffmpeg => {"filter_complex":"[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]","map":"[blurout]"}, :ffmpeg_stack => "v3.3.3", :preset => "empty" ) exported = transloadit.step "exported", "/s3/store", :use => [":original","encode","blur-fade"], :credentials => "YOUR_AWS_CREDENTIALS", :url_prefix => "https://demos.transloadit.com/" ) assembly = transloadit.assembly( :steps => [ :original, encode, blur-fade, exported ] ) # Add files to upload files = [] files.push("./big-buck-bunny-5s.mp4") # Start the Assembly response = assembly.create! *files until response.finished? sleep 1; response.reload! end if !response.error? 
# handle success end # pip install pytransloadit from transloadit import client tl = client.Transloadit('YOUR_TRANSLOADIT_KEY', 'YOUR_TRANSLOADIT_SECRET') assembly = tl.new_assembly() # Set Encoding Instructions assembly.add_step(':original', { 'robot': '/upload/handle' }) assembly.add_step('encode', { 'use': ':original', 'robot': '/video/encode', 'result': True, 'ffmpeg_stack': 'v3.3.3', 'preset': 'ipad-high' }) assembly.add_step('blur-fade', { 'use': 'encode', 'robot': '/video/encode', 'result': True, 'ffmpeg': {'filter_complex':'[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]','map':'[blurout]'}, 'ffmpeg_stack': 'v3.3.3', 'preset': 'empty' }) assembly.add_step('exported', { 'use': [':original','encode','blur-fade'], 'robot': '/s3/store', 'credentials': 'YOUR_AWS_CREDENTIALS', 'url_prefix': 'https://demos.transloadit.com/' }) # Add files to upload assembly.add_file(open('./big-buck-bunny-5s.mp4', 'rb')) # Start the Assembly assembly_response = assembly.create(retries=5, wait=True) print(assembly_response.data.get('assembly_id')) # or print(assembly_response.data['assembly_id']) // go get gopkg.in/transloadit/go-sdk.v1 package main import ( "context" "fmt" "gopkg.in/transloadit/go-sdk.v1" ) func main() { options := transloadit.DefaultConfig options.AuthKey = "YOUR_TRANSLOADIT_KEY" options.AuthSecret = "YOUR_TRANSLOADIT_SECRET" client := transloadit.NewClient(options) // Initialize new Assembly assembly := transloadit.NewAssembly() // Set Encoding Instructions assembly.AddStep(":original", map[string]interface{}{ "robot": "/upload/handle" }) assembly.AddStep("encode", map[string]interface{}{ "use": ":original", "robot": "/video/encode", "result": true, "ffmpeg_stack": "v3.3.3", "preset": "ipad-high" }) assembly.AddStep("blur-fade", map[string]interface{}{ "use": "encode", "robot": "/video/encode", "result": true, "ffmpeg": map[string]interface{}{"filter_complex": "[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]", "map": "[blurout]"}, "ffmpeg_stack": "v3.3.3", "preset": "empty" }) assembly.AddStep("exported", map[string]interface{}{ "use": []string{":original", "encode", "blur-fade"}, "robot": "/s3/store", "credentials": "YOUR_AWS_CREDENTIALS", "url_prefix": "https://demos.transloadit.com/" }) // Add files to upload assembly.AddFile("myfile_1", "./big-buck-bunny-5s.mp4") // Start the Assembly info, err := client.StartAssembly(context.Background(), assembly) if err != nil { panic(err) } // All files have now been uploaded and the Assembly has started but no // results are available yet since the conversion has not finished. // WaitForAssembly provides functionality for polling until the Assembly // has ended.
info, err = client.WaitForAssembly(context.Background(), info) if err != nil { panic(err) } fmt.Printf("You can check some results at: \n") fmt.Printf(" - %s\n", info.Results[":original"][0].SSLURL) fmt.Printf(" - %s\n", info.Results["encode"][0].SSLURL) fmt.Printf(" - %s\n", info.Results["blur-fade"][0].SSLURL) fmt.Printf(" - %s\n", info.Results["exported"][0].SSLURL) } // compile 'com.transloadit.sdk:transloadit:0.1.5' import com.transloadit.sdk.Assembly; import com.transloadit.sdk.Transloadit; import com.transloadit.sdk.exceptions.LocalOperationException; import com.transloadit.sdk.exceptions.RequestException; import com.transloadit.sdk.response.AssemblyResponse; import java.io.File; import java.util.HashMap; import java.util.Map; public class Main { public static void main(String[] args) { Transloadit transloadit = new Transloadit("YOUR_TRANSLOADIT_KEY", "YOUR_TRANSLOADIT_SECRET"); Assembly assembly = transloadit.newAssembly(); // Set Encoding Instructions Map<String, Object> originalStepOptions = new HashMap(); assembly.addStep(":original", "/upload/handle", originalStepOptions); Map<String, Object> encodeStepOptions = new HashMap(); encodeStepOptions.put("use", ":original"); encodeStepOptions.put("result", true); encodeStepOptions.put("ffmpeg_stack", "v3.3.3"); encodeStepOptions.put("preset", "ipad-high"); assembly.addStep("encode", "/video/encode", encodeStepOptions); Map<String, Object> blurFadeStepOptions = new HashMap(); blurFadeStepOptions.put("use", "encode"); blurFadeStepOptions.put("result", true); blurFadeStepOptions.put("ffmpeg", new HashMap(){{ put("filter_complex", "[0:v]split=2[base][blurred], [blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], [blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], [base][blurred-with-fadein]overlay[blurout]"); put("map", "[blurout]"); }}); blurFadeStepOptions.put("ffmpeg_stack", "v3.3.3"); blurFadeStepOptions.put("preset", "empty"); assembly.addStep("blur-fade", "/video/encode", blurFadeStepOptions); Map<String, Object> exportedStepOptions = new HashMap(); exportedStepOptions.put("use", new String[]{":original", "encode", "blur-fade"}); exportedStepOptions.put("credentials", "YOUR_AWS_CREDENTIALS"); exportedStepOptions.put("url_prefix", "https://demos.transloadit.com/"); assembly.addStep("exported", "/s3/store", exportedStepOptions); // Add files to upload assembly.addFile(new File("./big-buck-bunny-5s.mp4")); // Start the Assembly try { AssemblyResponse response = assembly.save(); // Wait for Assembly to finish executing while (!response.isFinished()) { response = transloadit.getAssemblyByUrl(response.getSslUrl()); } System.out.println(response.getId()); System.out.println(response.getUrl()); System.out.println(response.json()); } catch (RequestException | LocalOperationException e) { // Handle exception here } } } So many ways to integrate Bulk imports Add one of our import Robots to acquire and transcode massive media libraries. Handling uploads Front-end integration We integrate with web browsers via our next-gen file uploader Uppy and SDKs for Android and iOS. Back-end integration Pingbacks Configure a notify_url to let your server receive transcoding results JSON in the transloadit POST field.
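If you want to preview the blur-out fade locally before wiring it into an Assembly, the identical filter_complex string can be fed to a plain ffmpeg install. A sketch using Python's subprocess (assumes ffmpeg is on your PATH and a file named big-buck-bunny-5s.mp4 in the working directory):

```python
# Preview the blur-out fade locally with plain ffmpeg (sketch; assumes
# ffmpeg is installed and big-buck-bunny-5s.mp4 exists in the cwd).
import subprocess

# Same filter chain as in the Assembly instructions above.
FILTER = (
    "[0:v]split=2[base][blurred], "
    "[blurred]boxblur=luma_radius=50:chroma_radius=25:luma_power=1[blurred], "
    "[blurred]fade=type=in:start_time=3:duration=2:alpha=1[blurred-with-fadein], "
    "[base][blurred-with-fadein]overlay[blurout]"
)

subprocess.run(
    ["ffmpeg", "-i", "big-buck-bunny-5s.mp4",
     "-filter_complex", FILTER,
     "-map", "[blurout]",
     "blur-fade-preview.mp4"],
    check=True,
)
```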
Scales per-example losses with sample_weights and computes their average.

tf.nn.compute_average_loss( per_example_loss, sample_weight=None, global_batch_size=None )

Usage with distribution strategy and custom training loop:

with strategy.scope():
  def compute_loss(labels, predictions, sample_weight=None):
    # If you are using a `Loss` class instead, set reduction to `NONE` so that
    # we can do the reduction afterwards and divide by global batch size.
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)
    # Compute loss that is scaled by sample_weight and by global batch size.
    return tf.nn.compute_average_loss(
        per_example_loss, sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)

Args per_example_loss Per-example loss. sample_weight Optional weighting for each example. global_batch_size Optional global batch size value. Defaults to (size of first dimension of losses) * (number of replicas). Returns Scalar loss value.
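A toy sketch of the arithmetic (TensorFlow 2.x; the values are invented): the per-example losses are summed, scaled by any sample weights, and divided by the global batch size rather than the local one, which is what keeps gradients consistent across replicas.

```python
import tensorflow as tf

# Pretend this replica saw 4 of the 8 examples in the global batch.
per_example_loss = tf.constant([1.0, 2.0, 3.0, 4.0])

loss = tf.nn.compute_average_loss(per_example_loss, global_batch_size=8)
print(float(loss))  # 10.0 / 8 = 1.25, not 10.0 / 4 = 2.5
```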
Face API - v1.0 This API is currently available in: Australia East - australiaeast.api.cognitive.microsoft.com Brazil South - brazilsouth.api.cognitive.microsoft.com Canada Central - canadacentral.api.cognitive.microsoft.com Central India - centralindia.api.cognitive.microsoft.com Central US - centralus.api.cognitive.microsoft.com East Asia - eastasia.api.cognitive.microsoft.com East US - eastus.api.cognitive.microsoft.com East US 2 - eastus2.api.cognitive.microsoft.com France Central - francecentral.api.cognitive.microsoft.com Japan East - japaneast.api.cognitive.microsoft.com Japan West - japanwest.api.cognitive.microsoft.com Korea Central - koreacentral.api.cognitive.microsoft.com North Central US - northcentralus.api.cognitive.microsoft.com North Europe - northeurope.api.cognitive.microsoft.com South Africa North - southafricanorth.api.cognitive.microsoft.com South Central US - southcentralus.api.cognitive.microsoft.com Southeast Asia - southeastasia.api.cognitive.microsoft.com UK South - uksouth.api.cognitive.microsoft.com West Central US - westcentralus.api.cognitive.microsoft.com West Europe - westeurope.api.cognitive.microsoft.com West US - westus.api.cognitive.microsoft.com West US 2 - westus2.api.cognitive.microsoft.com UAE North - uaenorth.api.cognitive.microsoft.com PersonGroup - Get Retrieve person group name, userData and recognitionModel. To get person information under this personGroup, use PersonGroup Person - List. Http Method GET Request parameters personGroupId - personGroupId of the target person group. returnRecognitionModel (optional) - Return 'recognitionModel' or not. The default value is false. Response 200 A successful call returns the person group's information. JSON fields in response body: Fields Type Description personGroupId String Target personGroupId provided in request parameter. name String Person group's display name. userData String User-provided data attached to this person group. recognitionModel String The 'recognitionModel' associated with this person group. This is only returned when 'returnRecognitionModel' is explicitly set as true. { "personGroupId": "sample_group", "name": "group1", "userData": "User-provided data attached to the person group.", "recognitionModel": "recognition_03" } Response 401 Error code and message returned in JSON: Error Code Error Message Description Unspecified Invalid subscription Key or user/plan is blocked. { "error": { "code": "Unspecified", "message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key." } } Response 403 { "error": { "statusCode": 403, "message": "Out of call volume quota. Quota will be replenished in 2 days." } } Response 404 Error code and message returned in JSON: Error Code Error Message Description PersonGroupNotFound Person group ID is invalid. Valid format should be a string composed by numbers, English letters in lower case, '-', '_', and no longer than 64 characters. PersonGroupNotFound Person group is not found. { "error": { "code": "PersonGroupNotFound", "message": "Person group is not found."
} } Response 409 { "error": { "code": "ConcurrentOperationConflict", "message": "There is a conflict operation on requested resource, please try later." } } Response 429 { "error": { "statusCode": 429, "message": "Rate limit is exceeded. Try again in 26 seconds." } } Code samples @ECHO OFF curl -v -X GET "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}?returnRecognitionModel=false" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}" using System; using System.Net.Http.Headers; using System.Text; using System.Net.Http; using System.Web; namespace CSHttpClientSample { static class Program { static void Main() { MakeRequest(); Console.WriteLine("Hit ENTER to exit..."); Console.ReadLine(); } static async void MakeRequest() { var client = new HttpClient(); var queryString = HttpUtility.ParseQueryString(string.Empty); // Request headers client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}"); // Request parameters queryString["returnRecognitionModel"] = "false"; var uri = "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}?" + queryString; var response = await client.GetAsync(uri); } } } // // This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/) import java.net.URI; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.HttpClient; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.utils.URIBuilder; import org.apache.http.impl.client.HttpClients; import org.apache.http.util.EntityUtils; public class JavaSample { public static void main(String[] args) { HttpClient httpclient = HttpClients.createDefault(); try { URIBuilder builder = new URIBuilder("https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}"); builder.setParameter("returnRecognitionModel", "false"); URI uri = builder.build(); HttpGet request = new HttpGet(uri); request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}"); // Request body StringEntity reqEntity = new StringEntity("{body}"); request.setEntity(reqEntity); HttpResponse response = httpclient.execute(request); HttpEntity entity = response.getEntity(); if (entity != null) { System.out.println(EntityUtils.toString(entity)); } } catch (Exception e) { System.out.println(e.getMessage()); } } } <!DOCTYPE html> <html> <head> <title>JSSample</title> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script> </head> <body> <script type="text/javascript"> $(function() { var params = { // Request parameters "returnRecognitionModel": "false", }; $.ajax({ url: "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}?"
+ $.param(params), beforeSend: function(xhrObj){ // Request headers xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}"); }, type: "GET", // Request body data: "{body}", }) .done(function(data) { alert("success"); }) .fail(function() { alert("error"); }); }); </script> </body> </html> #import <Foundation/Foundation.h> int main(int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSString* path = @"https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}"; NSArray* array = @[ // Request parameters @"entities=true", @"returnRecognitionModel=false", ]; NSString* string = [array componentsJoinedByString:@"&"]; path = [path stringByAppendingFormat:@"?%@", string]; NSLog(@"%@", path); NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]]; [_request setHTTPMethod:@"GET"]; // Request headers [_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"]; // Request body [_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]]; NSURLResponse *response = nil; NSError *error = nil; NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error]; if (nil != error) { NSLog(@"Error: %@", error); } else { NSError* error = nil; NSMutableDictionary* json = nil; NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding]; NSLog(@"%@", dataString); if (nil != _connectionData) { json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error]; } if (error || !json) { NSLog(@"Could not parse loaded json with error:%@", error); } NSLog(@"%@", json); _connectionData = nil; } [pool drain]; return 0; } <?php // This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/) require_once 'HTTP/Request2.php'; $request = new Http_Request2('https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}'); $url = $request->getUrl(); $headers = array( // Request headers 'Ocp-Apim-Subscription-Key' => '{subscription key}', ); $request->setHeader($headers); $parameters = array( // Request parameters 'returnRecognitionModel' => 'false', ); $url->setQueryVariables($parameters); $request->setMethod(HTTP_Request2::METHOD_GET); // Request body $request->setBody("{body}"); try { $response = $request->send(); echo $response->getBody(); } catch (HttpException $ex) { echo $ex; } ?> ########### Python 2.7 ############# import httplib, urllib, base64 headers = { # Request headers 'Ocp-Apim-Subscription-Key': '{subscription key}', } params = urllib.urlencode({ # Request parameters 'returnRecognitionModel': 'false', }) try: conn = httplib.HTTPSConnection('northeurope.api.cognitive.microsoft.com') conn.request("GET", "/face/v1.0/persongroups/{personGroupId}?%s" % params, "{body}", headers) response = conn.getresponse() data = response.read() print(data) conn.close() except Exception as e: print("[Errno {0}] {1}".format(e.errno, e.strerror)) #################################### ########### Python 3.2 ############# import http.client, urllib.request, urllib.parse, urllib.error, base64 headers = { # Request headers 'Ocp-Apim-Subscription-Key': '{subscription key}', } params = urllib.parse.urlencode({ # Request parameters 'returnRecognitionModel': 'false', }) try: conn = http.client.HTTPSConnection('northeurope.api.cognitive.microsoft.com') 
conn.request("GET", "/face/v1.0/persongroups/{personGroupId}?%s" % params, "{body}", headers) response = conn.getresponse() data = response.read() print(data) conn.close() except Exception as e: print("[Errno {0}] {1}".format(e.errno, e.strerror)) #################################### require 'net/http' uri = URI('https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}') uri.query = URI.encode_www_form({ # Request parameters 'returnRecognitionModel' => 'false' }) request = Net::HTTP::Get.new(uri.request_uri) # Request headers request['Ocp-Apim-Subscription-Key'] = '{subscription key}' # Request body request.body = "{body}" response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http| http.request(request) end puts response.body
Alpha channel in linear and radial gradients MauriceMeilleur last edited by gferreira I've been using DrawBot to generate diagrams—specifically, Boolean diagrams of categories with fuzzy boundaries—and I would love to see both linear and radial gradients incorporate an alpha channel. continue here: https://github.com/typemytype/drawbot/issues/355 what does work:

im = ImageObject()
im.linearGradient((300, 300), (50, 0), (300, 0), color0=(1, 0, 0, 1), color1=(0, 1, 0, .3))
fontSize(100)
text("Hello World", (0, 30))
image(im, (0, 0))

This creates an image with a gradient that has opacity. But it will be a lot slower... MauriceMeilleur last edited by @frederik Thanks (and thanks for opening this over in the GitHub repo). I'll give it a try later today. Speed isn't a factor in this case because I'm generating static diagrams.
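For reuse, the workaround can be wrapped in a small helper. A sketch built only on the ImageObject.linearGradient call shown above; the helper name and argument order are my own, and it assumes the script runs inside the DrawBot app, where the drawing functions are in scope:

```python
# A reusable wrapper around the ImageObject workaround above (sketch).
# Only the linearGradient call itself comes from the thread.
def alpha_linear_gradient(size, point0, point1, color0, color1, position=(0, 0)):
    im = ImageObject()
    im.linearGradient(size, point0, point1, color0=color0, color1=color1)
    image(im, position)

newPage(300, 300)
alpha_linear_gradient((300, 300), (50, 0), (300, 0), (1, 0, 0, 1), (0, 1, 0, 0.3))
```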
Label sides of polygon Is it possible to label the sides of a polygon like in the following picture by one polygon command? Or do I need the text command as well? Or do you have any other ideas? It seems that there is no implemented solution to the issue. No problem, there is a quick solution using bare hands. (Although we could plot the polygon with the polygon command, the code below plots the edges itself, in parallel...)

def plotLabelledPolygon( vertices, labels, space=0.1 ):
    """For two lists of the same length, of vertices in the real plane,
    A, B, C, ... and of labels a, b, ... for AB, BC, ... we associate
    a plot object consisting of the segments and the labels.
    The labels are positioned corresponding to the middle of the segments,
    there is some space left in between.
    """
    p = plot( [] )
    n = len( vertices )
    if n < 3 or n != len( labels ):
        return p
    zVertices = []
    for A in vertices:
        if type(A) in ( tuple, list ):
            zA = A[0] + i*A[1]
        elif A in CC:
            zA = A
        else:
            zA = None    # feel free to extend for vectors & Co...
        zVertices.append( zA )
    for ind in range(n):
        j = ind+1 if ind+1 < n else 0
        A, zA = vertices[ ind ], zVertices[ ind ]
        B, zB = vertices[ j ], zVertices[ j ]
        p += line( [ A, B ] )
        zBA = zB - zA
        if zBA == 0:
            print( "zA = %s and zB = %s coincide :: IGNORED" % ( zA, zB ) )
            continue
        zT = CC( zA + zBA / 2 - space * I * zBA / abs( zBA ) )
        label = labels[ ind ]
        p += text( '$%s$' % label, ( zT.real(), zT.imag() ) )
    return p

Examples of usage:

vertices = [ (-1,0), (1,0), (0,1) ]
labels = [ 'a', 'b', 'c' ]    # or simply 'abc'
p = plotLabelledPolygon( vertices, labels )
p.axes_color( 'gray' )
p.show()

vertices = [ (0,0), (3,-3), (1,0), (3,3) ]
labels = 'abcd'
q = plotLabelledPolygon( vertices, labels, space=0.1 )
q.axes_color( 'gray' )
q.show()

(The font, the aspect, the axes, ... can be controlled in the plot object.)
Face API - v1.0 This API is currently available in: Australia East - australiaeast.api.cognitive.microsoft.com Brazil South - brazilsouth.api.cognitive.microsoft.com Canada Central - canadacentral.api.cognitive.microsoft.com Central India - centralindia.api.cognitive.microsoft.com Central US - centralus.api.cognitive.microsoft.com East Asia - eastasia.api.cognitive.microsoft.com East US - eastus.api.cognitive.microsoft.com East US 2 - eastus2.api.cognitive.microsoft.com France Central - francecentral.api.cognitive.microsoft.com Japan East - japaneast.api.cognitive.microsoft.com Japan West - japanwest.api.cognitive.microsoft.com Korea Central - koreacentral.api.cognitive.microsoft.com North Central US - northcentralus.api.cognitive.microsoft.com North Europe - northeurope.api.cognitive.microsoft.com South Africa North - southafricanorth.api.cognitive.microsoft.com South Central US - southcentralus.api.cognitive.microsoft.com Southeast Asia - southeastasia.api.cognitive.microsoft.com UK South - uksouth.api.cognitive.microsoft.com West Central US - westcentralus.api.cognitive.microsoft.com West Europe - westeurope.api.cognitive.microsoft.com West US - westus.api.cognitive.microsoft.com West US 2 - westus2.api.cognitive.microsoft.com UAE North - uaenorth.api.cognitive.microsoft.com PersonGroup - List List person groups' personGroupId, name, userData and recognitionModel. Person groups are stored in alphabetical order of personGroupId. The "start" parameter (string, optional) is a user-provided personGroupId value; only entries whose ids are larger by string comparison are returned. Set "start" to empty to return entries from the first item. The "top" parameter (int, optional) specifies the number of entries to return. A maximum of 1000 entries can be returned in one call. To fetch more, specify "start" with the last returned entry's id of the current call. For example, suppose there are five person groups, "group1" through "group5": "start=&top=" will return all 5 groups. "start=&top=2" will return "group1", "group2". "start=group2&top=3" will return "group3", "group4", "group5". Http Method GET Request parameters start - List person groups from the least personGroupId greater than "start". It contains no more than 64 characters. Default is empty. top - The number of person groups to list, ranging in [1, 1000]. Default is 1000. returnRecognitionModel - Return 'recognitionModel' or not. The default value is false. Response 200 A successful call returns an array of person groups and their information (personGroupId, name and userData). JSON fields in response body: Fields Type Description personGroupId String personGroupId of the existing person groups, created in PersonGroup - Create. name String Person group's display name. userData String User-provided data attached to this person group. recognitionModel String The 'recognitionModel' associated with this person group. This is only returned when 'returnRecognitionModel' is explicitly set as true.
[ { "personGroupId": "sample_group", "name": "group1", "userData": "User-provided data attached to the person group.", "recognitionModel": "recognition_01" }, { "personGroupId": "sample_group2", "name": "group2", "userData": "User-provided data attached to the person group.", "recognitionModel": "recognition_03" } ] Response 400 Error code and message returned in JSON: Error Code Error Message Description BadArgument Parameter top is invalid. Valid range is [1, 1000]. { "error": { "statusCode": 400, "message": "Parameter top is invalid." } } Response 401 Error code and message returned in JSON: Error Code Error Message Description Unspecified Invalid subscription Key or user/plan is blocked. { "error": { "code": "Unspecified", "message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key." } } Response 403 { "error": { "statusCode": 403, "message": "Out of call volume quota. Quota will be replenished in 2 days." } } Response 409 { "error": { "code": ConcurrentOperationConflict, "message": "There is a conflict operation on requested resource, please try later." } } Response 429 { "error": { "statusCode": 429, "message": "Rate limit is exceeded. Try again in 26 seconds." } } Code samples @ECHO OFF curl -v -X GET "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups?start={string}&top=1000&returnRecognitionModel=false" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}" using System; using System.Net.Http.Headers; using System.Text; using System.Net.Http; using System.Web; namespace CSHttpClientSample { static class Program { static void Main() { MakeRequest(); Console.WriteLine("Hit ENTER to exit..."); Console.ReadLine(); } static async void MakeRequest() { var client = new HttpClient(); var queryString = HttpUtility.ParseQueryString(string.Empty); // Request headers client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}"); // Request parameters queryString["start"] = "{string}"; queryString["top"] = "1000"; queryString["returnRecognitionModel"] = "false"; var uri = "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups?" 
+ queryString; var response = await client.GetAsync(uri); } } } // // This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/) import java.net.URI; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.HttpClient; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.utils.URIBuilder; import org.apache.http.impl.client.HttpClients; import org.apache.http.util.EntityUtils; public class JavaSample { public static void main(String[] args) { HttpClient httpclient = HttpClients.createDefault(); try { URIBuilder builder = new URIBuilder("https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups"); builder.setParameter("start", "{string}"); builder.setParameter("top", "1000"); builder.setParameter("returnRecognitionModel", "false"); URI uri = builder.build(); HttpGet request = new HttpGet(uri); request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}"); // Request body StringEntity reqEntity = new StringEntity("{body}"); request.setEntity(reqEntity); HttpResponse response = httpclient.execute(request); HttpEntity entity = response.getEntity(); if (entity != null) { System.out.println(EntityUtils.toString(entity)); } } catch (Exception e) { System.out.println(e.getMessage()); } } } <!DOCTYPE html> <html> <head> <title>JSSample</title> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script> </head> <body> <script type="text/javascript"> $(function() { var params = { // Request parameters "start": "{string}", "top": "1000", "returnRecognitionModel": "false", }; $.ajax({ url: "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups?" + $.param(params), beforeSend: function(xhrObj){ // Request headers xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}"); }, type: "GET", // Request body data: "{body}", }) .done(function(data) { alert("success"); }) .fail(function() { alert("error"); }); }); </script> </body> </html> #import <Foundation/Foundation.h> int main(int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSString* path = @"https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups"; NSArray* array = @[ // Request parameters @"entities=true", @"start={string}", @"top=1000", @"returnRecognitionModel=false", ]; NSString* string = [array componentsJoinedByString:@"&"]; path = [path stringByAppendingFormat:@"?%@", string]; NSLog(@"%@", path); NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]]; [_request setHTTPMethod:@"GET"]; // Request headers [_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"]; // Request body [_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]]; NSURLResponse *response = nil; NSError *error = nil; NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error]; if (nil != error) { NSLog(@"Error: %@", error); } else { NSError* error = nil; NSMutableDictionary* json = nil; NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding]; NSLog(@"%@", dataString); if (nil != _connectionData) { json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error]; } if (error || !json) { NSLog(@"Could not parse loaded json with error:%@", error); } NSLog(@"%@", json); _connectionData = 
nil; } [pool drain]; return 0; } <?php // This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/) require_once 'HTTP/Request2.php'; $request = new Http_Request2('https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups'); $url = $request->getUrl(); $headers = array( // Request headers 'Ocp-Apim-Subscription-Key' => '{subscription key}', ); $request->setHeader($headers); $parameters = array( // Request parameters 'start' => '{string}', 'top' => '1000', 'returnRecognitionModel' => 'false', ); $url->setQueryVariables($parameters); $request->setMethod(HTTP_Request2::METHOD_GET); // Request body $request->setBody("{body}"); try { $response = $request->send(); echo $response->getBody(); } catch (HttpException $ex) { echo $ex; } ?> ########### Python 2.7 ############# import httplib, urllib, base64 headers = { # Request headers 'Ocp-Apim-Subscription-Key': '{subscription key}', } params = urllib.urlencode({ # Request parameters 'start': '{string}', 'top': '1000', 'returnRecognitionModel': 'false', }) try: conn = httplib.HTTPSConnection('northeurope.api.cognitive.microsoft.com') conn.request("GET", "/face/v1.0/persongroups?%s" % params, "{body}", headers) response = conn.getresponse() data = response.read() print(data) conn.close() except Exception as e: print("[Errno {0}] {1}".format(e.errno, e.strerror)) #################################### ########### Python 3.2 ############# import http.client, urllib.request, urllib.parse, urllib.error, base64 headers = { # Request headers 'Ocp-Apim-Subscription-Key': '{subscription key}', } params = urllib.parse.urlencode({ # Request parameters 'start': '{string}', 'top': '1000', 'returnRecognitionModel': 'false', }) try: conn = http.client.HTTPSConnection('northeurope.api.cognitive.microsoft.com') conn.request("GET", "/face/v1.0/persongroups?%s" % params, "{body}", headers) response = conn.getresponse() data = response.read() print(data) conn.close() except Exception as e: print("[Errno {0}] {1}".format(e.errno, e.strerror)) #################################### require 'net/http' uri = URI('https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups') uri.query = URI.encode_www_form({ # Request parameters 'start' => '{string}', 'top' => '1000', 'returnRecognitionModel' => 'false' }) request = Net::HTTP::Get.new(uri.request_uri) # Request headers request['Ocp-Apim-Subscription-Key'] = '{subscription key}' # Request body request.body = "{body}" response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http| http.request(request) end puts response.body
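A sketch (not an official sample) of paging through all person groups with the documented start/top semantics, using the third-party requests library; endpoint and key are the same placeholders used in the samples above:

```python
# Page through every person group by repeatedly passing the last
# returned personGroupId as the next `start` value.
import requests

ENDPOINT = "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups"
HEADERS = {"Ocp-Apim-Subscription-Key": "{subscription key}"}

groups, start = [], ""
while True:
    page = requests.get(ENDPOINT, headers=HEADERS,
                        params={"start": start, "top": 1000}).json()
    groups.extend(page)
    if len(page) < 1000:  # a short page means we reached the end
        break
    start = page[-1]["personGroupId"]  # continue after the last returned id

print(len(groups))
```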
Background

However, I personally found that EIN was a huge pain to work with, and I mostly ended up working with the web interface anyway. It is a bit redundant to do so, given that, at least for my purposes, the end result was a LaTeX document. Breaking down the rest of my requirements went a bit like this:

What exports well to TeX? Org, Markdown, anything which goes into pandoc
What displays code really well? LaTeX, Markdown, Org
What allows easy visualization of code snippets? Rmarkdown, RStudio, JupyterHub, Org with babel

Clearly, orgmode is the common denominator and, ergo, a perfect JupyterHub alternative.

Setup

Throughout this post I will assume the following structure:

mkdir -p tmp/images
touch tmp/myFakeJupyter.org
tree tmp

tmp
├── images
└── myFakeJupyter.org

1 directory, 1 file

As is evident, we have a folder tmp which will have all the things we need for dealing with our setup.

Virtual Python

Without waxing too eloquent on the whole reason behind doing this, since I will rant about virtual python management systems elsewhere, here I will simply describe my preferred method, which is using poetry.

# In a folder above tmp
poetry init
poetry add numpy matplotlib scipy pandas

# Same place as the poetry files
echo "layout_poetry()" >> .envrc

Note: We can nest an arbitrary number of the tmp structures under a single place where we define the poetry setup. I prefer using direnv to ensure that I never forget to hook into the right environment.

Orgmode

This is not an introduction to org; however, there are some basic settings to keep in mind to make sure the setup works as expected.

Indentation

Python is notoriously weird about whitespace, so we will ensure that our export process does not mangle whitespace and offend the python interpreter. We will have the following line at the top of our orgmode file:

# -*- org-src-preserve-indentation: t; org-edit-src-content-indentation: 0; -*-

Note: this post actually generates the file being discussed here. You can get the whole file here.

TeX Settings

These are also basically optional, but at the very least you will need the following:

#+author: Rohit Goswami
#+title: Whatever
#+subtitle: Wittier line about whatever
#+date: \today
#+OPTIONS: toc:nil

I actually use a lot of math via the TeX input mode in Emacs, so I like the following settings for math:

# For math display
#+LATEX_HEADER: \usepackage{amsfonts}
#+LATEX_HEADER: \usepackage{unicode-math}

There are a bunch of other settings which may be used, but these are the bare minimum; more on that would be in a snippet anyway.

Note: rendering math in the orgmode file in this manner requires that we use XeTeX to compile the final file.

Org-Python

We essentially need to ensure that:

Babel uses our virtual python
The same session is used for each block

We will get our poetry python pretty easily:

which python

Now we will use this as a common header-arg passed into the property drawer to make sure we don't need to set them in every code block.
We can use the following structure in our file:

\* Python Stuff
:PROPERTIES:
:header-args: :python /home/haozeke/.cache/pypoetry/virtualenvs/test-2aLV_5DQ-py3.8/bin/python :session One :results output :exports both
:END:

Now we can simply work with code as we normally would:

\#+BEGIN_SRC python
print("Hello World")
\#+END_SRC

Note: For some reason, this property needs to be set on every heading (as of Feb 13 2020).

In the actual file you will want to remove the extraneous \ symbols:

\* → *
\#+BEGIN_SRC → #+BEGIN_SRC
\#+END_SRC → #+END_SRC

Python Images and Orgmode

To view images in orgmode as we would in a JupyterLab notebook, we will use a slight trick:

We will ensure that the code block returns a file path, via the :results output file header arguments
The code block should end with a print statement to actually emit the file name

So we want a code block like this:

#+BEGIN_SRC python :results output file :exports both
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_circles

X, y = make_circles(100, factor=.1, noise=.1)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plt.xlabel('x1')
plt.ylabel('x2')
plt.savefig('images/plotCircles.png', dpi = 300)
print('images/plotCircles.png') # return filename to org-mode
#+END_SRC

Which would give the following when executed:

#+RESULTS:
[[file:images/plotCircles.png]]

Since that looks pretty ugly, this will actually look like this:

import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_circles

X, y = make_circles(100, factor=.1, noise=.1)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plt.xlabel('x1')
plt.ylabel('x2')
plt.savefig('images/plotCircles.png', dpi = 300)
print('images/plotCircles.png') # return filename to org-mode

Bonus

A better way to simulate standard jupyter workflows is to just specify the properties once at the beginning:

#+PROPERTY: header-args:python :python /home/haozeke/.cache/pypoetry/virtualenvs/test-2aLV_5DQ-py3.8/bin/python :session One :results output :exports both

This setup circumvents having to set the properties per sub-tree, though for very large projects it is useful to use different processes.

Conclusions

The last step is of course to export the file to a TeX file and then compile that with something like latexmk -pdfxe -shell-escape file.tex. There are a million and one variations of this, of course, but this is enough to get started. The whole file is also reproduced here.
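Pulling the pieces above together, a minimal complete org file could look something like this sketch; the interpreter path is the post's own example, so substitute the output of your which python:

# -*- org-src-preserve-indentation: t; org-edit-src-content-indentation: 0; -*-
#+author: Rohit Goswami
#+title: Whatever
#+date: \today
#+OPTIONS: toc:nil
#+PROPERTY: header-args:python :python /home/haozeke/.cache/pypoetry/virtualenvs/test-2aLV_5DQ-py3.8/bin/python :session One :results output :exports both

* Python Stuff

#+BEGIN_SRC python :results output file :exports both
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])
plt.savefig('images/quickCheck.png', dpi=300)
print('images/quickCheck.png')  # return the filename to org-mode
#+END_SRC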
Fantastic! Now we’ll switch gears and show you an iterative algorithm to sum the digits of a number. This function, sum_digits(), produces the sum of all the digits in a positive number as if they were each a single number:

# Linear - O(N), where "N" is the number of digits in the number
def sum_digits(n):
    if n < 0:
        raise ValueError("Inputs 0 or greater only!")
    result = 0
    while n != 0:
        result += n % 10
        n = n // 10
    return result

sum_digits(12)
# 1 + 2
# 3
sum_digits(552)
# 5 + 5 + 2
# 12
sum_digits(123456789)
# 1 + 2 + 3 + 4...
# 45

Instructions

1. Implement your version of sum_digits() which has the same functionality using recursive calls!
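For reference, one possible recursive version might look like the sketch below. Since the exercise asks you to write your own, treat this only as something to check against afterwards:

# Also O(N): one recursive call per digit
def sum_digits_recursive(n):
    if n < 0:
        raise ValueError("Inputs 0 or greater only!")
    if n < 10:
        # base case: a single-digit number sums to itself
        return n
    # last digit plus the digit sum of everything to its left
    return n % 10 + sum_digits_recursive(n // 10)

sum_digits_recursive(552)
# 12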
I'm trying to add an input callback to an AVAudioEngine's inputNode, but it is never called. The hope is that I can use AVAudioEngine to manage the basic AUGraph for iOS and OS X while running my own code in the middle. I also tried installing a tap on the input node, but I can't change the buffer length there. I made a single-view iOS app and put this code in viewDidLoad:

_audioEngine = [AVAudioEngine new];
_inputNode = _audioEngine.inputNode;
_outputNode = _audioEngine.outputNode;

AURenderCallbackStruct inputCallback;
inputCallback.inputProc = inputCalbackProc;
inputCallback.inputProcRefCon = (__bridge void *)(self);

AudioUnitSetProperty(_inputNode.audioUnit,
                     kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global,
                     0,
                     &inputCallback,
                     sizeof(inputCallback));

[_audioEngine startAndReturnError:nil];

The render callback is defined like this:

OSStatus inputCalbackProc(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    printf("Called");
    return noErr;
}

I have managed to install a render callback on the output node's audio unit in the same way, but my input callback is never invoked. I have verified that the input node's audio unit is the same as the output node's audio unit, which suggests the graph has been set up correctly. I also tried setting kAudioOutputUnitProperty_EnableIO on the RemoteIO unit (inputNode.audioUnit). Does anyone have any suggestions?

Can you show your code for enabling I/O? Bear in mind that it should be on the kAudioUnitScope_Input scope and element 1. RemoteIO does not actually provide a callback when input is ready to be processed; since it is the same hardware as the output, it can render the input unit when the output unit renders.

There are two things I can see that might be wrong. You don't set up the audio session; I wonder if you have configured it with a category that allows input:

let audioSession = AVAudioSession.sharedInstance()
audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
audioSession.setActive(true)

Also, kAudioOutputUnitProperty_EnableIO must be applied to kAudioUnitScope_Input, because input and output can be enabled or disabled independently:

AudioUnitSetProperty(ioUnit,
                     kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input,
                     bus1,
                     &enableInput,
                     sizeof(enableInput))

There are other things you should check, but you said you already have a callback working elsewhere, so these are the ones specific to this unit and bus. Note: this code is NOT complete; it only shows the main points relevant to configuring an audio session and enabling input.
The problem: adding the role when a reaction is added works fine.

@commands.Cog.listener()
async def on_raw_reaction_add(self, payload):
    if payload.message_id == list.GAMEPOST_ID:
        channel = self.client.get_channel(payload.channel_id) # get the channel object
        guild = await self.client.fetch_guild(payload.guild_id)
        emoji = str(payload.emoji)
        role = discord.utils.get(guild.roles, id = list.GAMEROLES[emoji])
        await payload.member.add_roles(role)
        print('[ROLE] User {0.display_name} has been granted with role {1.name}'.format(payload.member, role))

But with practically identical code, removing the role raises an error:

@commands.Cog.listener()
async def on_raw_reaction_remove(self, payload):
    if payload.message_id == list.GAMEPOST_ID:
        channel = self.client.get_channel(payload.channel_id) # get the channel object
        guild = await self.client.fetch_guild(payload.guild_id)
        emoji = str(payload.emoji)
        role = discord.utils.get(guild.roles, id = list.GAMEROLES[emoji])
        await payload.member.remove_roles(role)
        print('[ROLE] Role {1.name} has been removed for user {0.display_name}'.format(payload.member, role))

And here, in fact, is the error:

Ignoring exception in on_raw_reaction_remove
Traceback (most recent call last):
  File "D:\Programs\Python 3.9.2\lib\site-packages\discord\client.py", line 333, in _run_event
    await coro(*args, **kwargs)
  File "d:\Warden\Project\Python\client\cogs\roles.py", line 90, in on_raw_reaction_remove
    await payload.member.remove_roles(role)
AttributeError: 'NoneType' object has no attribute 'remove_roles'
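The thread ends there, but the traceback points at a documented quirk: discord.py only populates payload.member for reaction add events; for removals it is None, so the member has to be looked up explicitly. A sketch of the usual workaround, reusing the question's own names (list.GAMEPOST_ID, list.GAMEROLES):

@commands.Cog.listener()
async def on_raw_reaction_remove(self, payload):
    if payload.message_id == list.GAMEPOST_ID:
        guild = await self.client.fetch_guild(payload.guild_id)
        role = discord.utils.get(guild.roles, id=list.GAMEROLES[str(payload.emoji)])
        # payload.member is None for removals; fetch the member by user_id instead
        member = await guild.fetch_member(payload.user_id)
        await member.remove_roles(role)
        print('[ROLE] Role {1.name} has been removed for user {0.display_name}'.format(member, role))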
The oct() function in Python

The oct() function is one of Python's built-in functions, used to convert an integer to its corresponding octal representation. What is the syntax of oct(), and what parameters does it take? Let's find out with Quantrimang in this article.

Syntax of oct() in Python

oct(x)

Parameters of oct():

oct() takes exactly one parameter:

x: an integer (an int object)

x can be:

An integer (in binary, decimal, or hexadecimal notation). If x is not an integer, it needs to implement __index__() returning an integer.

Return value of oct()

The oct() function converts an integer to the corresponding octal value.

Example 1: How does oct() work?

# decimal to octal
print('oct(10) is:', oct(10))

# binary to octal
print('oct(0b101) is:', oct(0b101))

# hexadecimal to octal
print('oct(0XA) is:', oct(0XA))

Running the program, the output is:

oct(10) is: 0o12
oct(0b101) is: 0o5
oct(0XA) is: 0o12

Example 2: oct() with custom objects

class Person:
    age = 23

    def __index__(self):
        return self.age

    def __int__(self):
        return self.age

person = Person()
print('The oct is:', oct(person))

Running the program, the output is:

The oct is: 0o27

Here, the Person class implements __index__() and __int__(). That is why we can use oct() on Person objects.

Note: For compatibility, you should implement __int__() and __index__() with the same output.

See also: Python's built-in functions
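One extra detail worth knowing (not from the original article): oct() always returns a string with the 0o prefix. If you only want the bare octal digits, format() does the job:

n = 100
print(oct(n))           # 0o144
print(format(n, 'o'))   # 144 (no prefix)
print(f"{n:#o}")        # 0o144 (the '#' flag restores the prefix)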
RSpec, JRuby, Mocking, and Multiple Interfaces

Posted by Nick Sieger Fri, 01 Dec 2006 18:56:00 GMT

The prospect of doing behavior-driven development in Java has just taken a step closer with the news of RSpec running on JRuby. This is already a big step that will have an impact on Ruby and Java programmers alike in a number of ways. However, it could be even better.

RSpec has a nice, intuitive mocking API, which will unfortunately, at the present time, be useless when working with Java objects. It would be awesome to try to get it to work, though. Some possibilities:

Map to JMock and use JMock under the hood. Not a very attractive option for a number of reasons, but mainly because add-on bridging layers are complex and should be avoided.

Improve the ability of JRuby to implement any number of Java interfaces dynamically. Consider this spec. It's trivial, but bear with me.

context "A TaskRunner" do
  setup do
    @task = mock("Runnable")
    @task_runner = TaskRunner.new(@task)
  end

  specify "runs a task when executed" do
    @task.should_receive(:run)
    @task_runner.execute
  end
end

This spec might be satisfied by the following Java code:

public class TaskRunner {
    private Runnable task;

    public TaskRunner(Runnable r) {
        this.task = r;
    }

    public void execute() {
        task.run();
    }
}

Notice how I defined the @task in the spec above. This is the normal way of mocking in RSpec, and the example illustrates how I think JRuby should handle interfaces in Java: by duck-typing them. Basically, the RSpec mock should act like a Java Runnable because I've defined a run method on it (in this case implicitly with @task.should_receive(:run)).

JRuby could wrap a dynamic invocation proxy around any Ruby object just before passing it into a Java method invocation, without doing any type- or method-checking up front. Just define the proxy as implementing the interface required by the Java method signature, let the JRuby runtime do its thing, and attempt to resolve methods as they're invoked, possibly falling back to method_missing, even!

Note that this would also make moot the multiple interface syntax discussion, because you'd never have to declare an object in JRuby as implementing any particular interface. Just define the appropriately named methods with the proper arity, and you're done. Maybe you don't even need to declare all of them, if they never get called for your usage! This is the Ruby Way, and would be a completely natural extension to the way Java objects are manipulated in JRuby today, not to mention extremely concise and powerful.

This would also allow RSpec mocking to just work, at least for Java interface types, which would be way cool.

Charlie has a Swing demo that he frequently gives when talking about JRuby. Under the new proposal, it would look more like this:

require 'java'

frame = javax.swing.JFrame.new("Hello")
frame.setSize(200, 200)
frame.show

button = javax.swing.JButton.new("OK!")
frame.add(button)
frame.show

def actionPerformed(event)
  event.source.text = "Pressed!"
end

button.addActionListener self

With luck, this approach will be coming to JRuby very soon.
Overview

Pyramid is a Python framework that is the spiritual successor to Pylons and Zope, frameworks popular in the mid-to-late 2000s. Pyramid is supported on v6+ platforms using any Python version from 2.7 onward with Passenger.

Quickstart

All commands are done from the terminal for convenience.

PREREQUISITE: create a suitable Passenger-compatible filesystem layout

cd /var/www && mkdir -p pyramid/{tmp,public}

OPTIONAL PREREQUISITE: determine a suitable Python version using pyenv

cd pyramid && pyenv local 3.3.5

Install Pyramid. In the above example, using pyenv to set 3.3.5, Pyramid will be installed as a Python 3.3.5 egg.

pip install pyramid --no-use-wheel

Create a startup file named passenger_wsgi.py, the de facto startup file for Python-based apps. This is a simple "Hello World" application with routing that greets whatever name the route supplies; for example, /hello/world responds with "Hello world!". You can use vim or nano as a text editor from the shell.

from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def hello_world(request):
    return Response('Hello %(name)s!' % request.matchdict)

config = Configurator()
config.add_route('hello', '/hello/{name}')
config.add_view(hello_world, route_name='hello')
application = config.make_wsgi_app()

if __name__ == '__main__':
    # Only used when run directly for local testing;
    # Passenger imports the 'application' object above.
    server = make_server('0.0.0.0', 8080, application)
    server.serve_forever()

Connect public/ to a subdomain.

Inform Passenger to serve this as a Python application:

echo "PassengerPython /.socket/python/shims/python" > public/.htaccess

Enjoy!

Viewing launcher errors

In the event an application fails to launch, errors will be logged to passenger.log. See KB: Viewing launcher errors.

Restarting

Like any Passenger app, you can follow the general Passenger guidelines to restart an app.
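Once public/ is wired to a subdomain, a quick smoke test could look like this (the domain here is a hypothetical placeholder, not one from the guide):

curl https://pyramid.example.com/hello/world
# => Hello world!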
What happened

When I tried to install simplejson, I was told my pip version was too old, so I tried to upgrade pip with the command below. Then I got this error:

pip install --user --upgrade pip

ubuntu@ip-172-31-0-101:~$ pip
Traceback (most recent call last):
  File "/usr/bin/pip3", line 9, in <module>
    from pip import main
ImportError: cannot import name 'main'

Still investigating the cause. Perhaps it breaks when run on certain Ubuntu versions?

[Update 2018/05/20] I just tried it again and it worked fine.
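For anyone hitting the same traceback: pip 10 removed pip.main(), but Ubuntu's old /usr/bin/pip3 launcher script still imports it, so a user-level pip upgrade breaks the system launcher. Two commonly suggested ways out, sketched here (not from the original post):

# invoke pip through the interpreter, bypassing the stale launcher script
python3 -m pip install --user simplejson

# or roll back by uninstalling the user-level pip, restoring the distro one
python3 -m pip uninstall pip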